Introduction to Information Retrieval
Hinrich Schütze and Christina Lioma
Lecture 21: Link Analysis

The web as a directed graph

- Assumption 1: A hyperlink is a quality signal.
  - The hyperlink d1 → d2 indicates that d1's author deems d2 high-quality and relevant.
- Assumption 2: The anchor text describes the content of d2.
  - We use "anchor text" somewhat loosely here for the text surrounding the hyperlink.
  - Example: "You can find cheap cars <a href="…">here</a>."
  - Anchor text: "You can find cheap cars here"

[text of d2] only vs. [text of d2] + [anchor text → d2]

- Searching on [text of d2] + [anchor text → d2] is often more effective than searching on [text of d2] only.
- Example: query IBM
  - Matches IBM's copyright page
  - Matches many spam pages
  - Matches the IBM Wikipedia article
  - May not match the IBM home page!
  - … if the IBM home page is mostly graphics
- Searching on [anchor text → d2] is better for the query IBM.
  - In this representation, the page with the most occurrences of IBM is www.ibm.com.

Anchor text containing IBM pointing to www.ibm.com

[figure]

Indexing anchor text

- Thus: anchor text is often a better description of a page's content than the page itself.
- Anchor text can be weighted more highly than document text (based on Assumptions 1 and 2) – a small scoring sketch follows below.

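A minimal sketch of that weighting idea, assuming a toy in-memory index; the documents, field names, and weight values are illustrative, not from the lecture:

```python
import math

# Toy scoring sketch: weight anchor-text term matches more highly than
# body-text matches (Assumptions 1 & 2). Documents and weights are made up.
BODY_WEIGHT = 1.0
ANCHOR_WEIGHT = 3.0   # assumed boost for anchor-text occurrences

# page -> {"body": body terms, "anchor": anchor-text terms from in-links}
index = {
    "ibm-home":  {"body": [],           "anchor": ["ibm"] * 100},
    "spam-page": {"body": ["ibm"] * 50, "anchor": []},
}

def tf(terms, t):
    """Log-damped term frequency, so sheer repetition cannot dominate."""
    n = terms.count(t)
    return 1.0 + math.log(n) if n > 0 else 0.0

def score(page, query_terms):
    fields = index[page]
    return sum(BODY_WEIGHT * tf(fields["body"], t) +
               ANCHOR_WEIGHT * tf(fields["anchor"], t)
               for t in query_terms)

# The graphics-only home page wins on its anchor text alone:
print(sorted(index, key=lambda p: score(p, ["ibm"]), reverse=True))
```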
Google bombs

- A Google bomb is a search with "bad" results due to maliciously manipulated anchor text.
- Google introduced a new weighting function in January 2007 that fixed many Google bombs.
- Still some remnants: [dangerous cult] on Google, Bing, Yahoo
  - Coordinated link creation by those who dislike the Church of Scientology
- Defused Google bombs: [dumb motherf…], [who is a failure?], [evil empire]

Origins of PageRank: Citation analysis (1)

- Citation analysis: the analysis of citations in the scientific literature.
- Example citation: "Miller (2001) has shown that physical activity alters the metabolism of estrogens."
- We can view "Miller (2001)" as a hyperlink linking two scientific articles.
- One application of these "hyperlinks" in the scientific literature:
  - Measure the similarity of two articles by the overlap of other articles citing them.
  - This is called cocitation similarity.
  - Cocitation similarity on the web: Google's "find pages like this" or "Similar" feature.

Origins of PageRank: Citation analysis (2)

- Another application: citation frequency can be used to measure the impact of an article.
  - Simplest measure: each article gets one vote – not very accurate.
- On the web: citation frequency = inlink count
  - A high inlink count does not necessarily mean high quality…
  - … mainly because of link spam.
- Better measure: weighted citation frequency or citation rank
  - An article's vote is weighted according to its citation impact.
  - Circular? No: this can be formalized in a well-defined way.

Origins of PageRank: Citation analysis (3)

- Better measure: weighted citation frequency or citation rank.
- This is basically PageRank.
- PageRank was invented in the context of citation analysis by Pinski and Narin in the 1970s.
- Citation analysis is a big deal: the budget and salary of this lecturer are / will be determined by the impact of his publications!

Origins of PageRank: Summary

- We can use the same formal representation for
  - citations in the scientific literature
  - hyperlinks on the web
- Appropriately weighted citation frequency is an excellent measure of quality…
  - … both for web pages and for scientific publications.
- Next: the PageRank algorithm for computing weighted citation frequency on the web.

Model behind PageRank: Random walk

- Imagine a web surfer doing a random walk on the web:
  - Start at a random page.
  - At each step, go out of the current page along one of the links on that page, equiprobably.
- In the steady state, each page has a long-term visit rate.
- This long-term visit rate is the page's PageRank.
- PageRank = long-term visit rate = steady-state probability.

Formalization of the random walk: Markov chains

- A Markov chain consists of N states, plus an N × N transition probability matrix P.
- state = page
- At each step, we are on exactly one of the pages.
- For 1 ≤ i, j ≤ N, the matrix entry Pij tells us the probability of j being the next page, given that we are currently on page i.
- Clearly, for all i, Σj Pij = 1.

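A minimal NumPy sketch of this setup; the example adjacency matrix is made up for illustration:

```python
import numpy as np

# Build the transition probability matrix P from a link graph by
# row-normalizing a 0/1 adjacency matrix. The example graph is made up.
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [0, 1, 0]], dtype=float)   # A[i, j] = 1 iff page i links to page j

out_degree = A.sum(axis=1, keepdims=True)  # out-links per page (no dead ends here)
P = A / out_degree                         # row i: equiprobable over i's out-links

assert np.allclose(P.sum(axis=1), 1.0)     # for all i: sum_j P[i, j] = 1
# Dead-end rows (out-degree 0) would divide by zero; teleporting, introduced
# below, is what handles them.
```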
Example web graph

[figure: pages d0–d6 and the hyperlinks between them]

Link matrix for the example

[7 × 7 table of 0/1 link entries over d0–d6; the entries are not recoverable from this transcript]

Transition probability matrix P for the example

[7 × 7 row-stochastic table over d0–d6; the entries are not recoverable from this transcript]

Long-term visit rate

- Recall: PageRank = long-term visit rate.
- The long-term visit rate of page d is the probability that a web surfer is at page d at a given point in time.
- Next: what properties must hold of the web graph for the long-term visit rate to be well defined?
- The web graph must correspond to an ergodic Markov chain.
- First a special case: the web graph must not contain dead ends.

Dead ends

[figure]

- The web is full of dead ends.
- A random walk can get stuck in dead ends.
- If there are dead ends, long-term visit rates are not well defined (or nonsensical).

Teleporting – to get us out of dead ends

- At a dead end, jump to a random web page with probability 1/N.
- At a non-dead end, with probability 10%, jump to a random web page (to each with probability 0.1/N).
- With the remaining probability (90%), go out on a random hyperlink.
  - For example, if the page has 4 outgoing links: randomly choose one with probability (1 − 0.10)/4 = 0.225.
- 10% is a parameter, the teleportation rate.
- Note: "jumping" from a dead end is independent of the teleportation rate.

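A sketch of these rules as a matrix construction, assuming NumPy; the function name is mine, and the 10% rate follows the slides:

```python
import numpy as np

def transition_with_teleporting(A: np.ndarray, alpha: float = 0.10) -> np.ndarray:
    """Build the teleporting transition matrix from a 0/1 adjacency matrix A.

    Dead-end rows (no out-links): jump to every page with probability 1/N.
    Other rows: with probability alpha jump uniformly (alpha/N each), and with
    probability 1 - alpha follow one of the page's out-links equiprobably.
    """
    N = A.shape[0]
    out = A.sum(axis=1)
    P = np.empty((N, N))
    for i in range(N):
        if out[i] == 0:
            P[i] = 1.0 / N                               # dead end: pure random jump
        else:
            P[i] = alpha / N + (1 - alpha) * A[i] / out[i]
    return P

# Per the slides: conditional on not teleporting, a page with 4 out-links
# follows each link with probability (1 - 0.10)/4 = 0.225.
```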
Result of teleporting

- With teleporting, we cannot get stuck in a dead end.
- But even without dead ends, a graph may not have well-defined long-term visit rates.
- More generally, we require that the Markov chain be ergodic.

Ergodic Markov chains

- A Markov chain is ergodic if it is irreducible and aperiodic.
- Irreducibility. Roughly: there is a path from any page to any other page.
- Aperiodicity. Roughly: the pages cannot be partitioned such that the random walker visits the partitions cyclically.
- A non-ergodic Markov chain: [figure]

Ergodic Markov chains

- Theorem: for any ergodic Markov chain, there is a unique long-term visit rate for each state.
- This is the steady-state probability distribution.
- Over a long time period, we visit each state in proportion to this rate.
- It doesn't matter where we start.
- Teleporting makes the web graph ergodic.
- ⇒ The web graph with teleporting has a steady-state probability distribution.
- ⇒ Each page in the web graph with teleporting has a PageRank.

Formalization of "visit": Probability vector

- A probability (row) vector x = (x1, …, xN) tells us where the random walk is at any point.
- Example: the walk is at page i with certainty:

  ( 0   0   0   …   1   …   0    0    0 )
    1   2   3   …   i   …  N−2  N−1   N

- More generally: the random walk is on page i with probability xi.
- Example: a general distribution, e.g. with xi = 0.2 (the slide's other entries were lost in this transcript).
- Σi xi = 1

Change in probability vector

- If the probability vector is x = (x1, …, xN) at this step, what is it at the next step?
- Recall that row i of the transition probability matrix P tells us where we go next from state i.
- So from x, our next state is distributed as xP.

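In NumPy terms, one step of the walk is a single vector-matrix product; a minimal sketch using the two-page chain that appears later in the power-method example:

```python
import numpy as np

# One step of the random walk: if the current distribution is x, the next is xP.
P = np.array([[0.1, 0.9],
              [0.3, 0.7]])    # two-page chain from the power-method example
x = np.array([0.0, 1.0])      # start on page d2 with probability 1
x_next = x @ P                # -> array([0.3, 0.7])
```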
Steady state in vector notation

- The steady state in vector notation is simply a vector π = (π1, π2, …, πN) of probabilities.
- (We use π to distinguish it from the notation for the probability vector x.)
- πi is the long-term visit rate (or PageRank) of page i.
- So we can think of PageRank as a very long vector – one entry per page.

Steady-state distribution: Example

- What is the PageRank / steady state in this example? [figure: two pages d1, d2]

  x1 = Pt(d1)   x2 = Pt(d2)
  P11 = 0.25    P12 = 0.75
  P21 = 0.25    P22 = 0.75

  t0:   x1        x2
  t1:   0.25      0.75   (convergence: both rows of P are identical, so any start reaches the steady state in one step)

  Pt(d1) = Pt−1(d1) · P11 + Pt−1(d2) · P21
  Pt(d2) = Pt−1(d1) · P12 + Pt−1(d2) · P22

  PageRank vector = π = (π1, π2) = (0.25, 0.75)

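The same result follows from solving π = πP directly; a worked check in LaTeX:

```latex
% Solving \pi = \pi P for the two-page example, with \pi_1 + \pi_2 = 1:
\begin{align*}
\pi_1 &= 0.25\,\pi_1 + 0.25\,\pi_2
      \quad\Rightarrow\quad 0.75\,\pi_1 = 0.25\,\pi_2
      \quad\Rightarrow\quad \pi_2 = 3\,\pi_1 \\
\pi_1 + \pi_2 &= 1
      \quad\Rightarrow\quad 4\,\pi_1 = 1
      \quad\Rightarrow\quad \pi = (0.25,\ 0.75)
\end{align*}
```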
How do we compute the steady-state vector?

- In other words: how do we compute PageRank?
- Recall: π = (π1, π2, …, πN) is the PageRank vector, the vector of steady-state probabilities…
- … and if the distribution in this step is x, then the distribution in the next step is xP.
- But π is the steady state!
- So: π = πP
- Solving this matrix equation gives us π.
- π is the principal left eigenvector of P…
- … that is, π is the left eigenvector with the largest eigenvalue.
- All transition probability matrices have largest eigenvalue 1.

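One way to get π numerically is an eigenvector solve; a minimal NumPy sketch (left eigenvectors of P are right eigenvectors of Pᵀ; the two-page chain is the one used in the power example below):

```python
import numpy as np

# pi = pi P  means pi is a left eigenvector of P with eigenvalue 1,
# i.e. a right eigenvector of P transposed.
P = np.array([[0.1, 0.9],
              [0.3, 0.7]])

eigvals, eigvecs = np.linalg.eig(P.T)
k = np.argmin(np.abs(eigvals - 1.0))   # the eigenvalue closest to 1
pi = np.real(eigvecs[:, k])
pi = pi / pi.sum()                     # rescale to a probability vector

print(pi)   # -> approximately [0.25, 0.75]
```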
One way of computing the PageRank π

- Start with any distribution x, e.g., the uniform distribution.
- After one step, we're at xP.
- After two steps, we're at xP².
- After k steps, we're at xPᵏ.
- Algorithm: multiply x by increasing powers of P until convergence.
- This is called the power method.
- Recall: regardless of where we start, we eventually reach the steady state π.
- Thus: we will eventually (in asymptotia) reach the steady state.

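A compact power-method sketch in NumPy; the function name, tolerance, and iteration cap are my choices:

```python
import numpy as np

def pagerank_power(P: np.ndarray, tol: float = 1e-10, max_iter: int = 1000) -> np.ndarray:
    """Power method: start from the uniform distribution and keep
    multiplying by P until the distribution stops changing."""
    N = P.shape[0]
    x = np.full(N, 1.0 / N)        # any start works; uniform is customary
    for _ in range(max_iter):
        x_next = x @ P
        if np.abs(x_next - x).sum() < tol:
            return x_next
        x = x_next
    return x

# Two-page example from the next slides: converges to (0.25, 0.75).
P = np.array([[0.1, 0.9],
              [0.3, 0.7]])
print(pagerank_power(P))
```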
Power method: Example

- What is the PageRank / steady state in this example? [figure: a two-page web graph]

Computing PageRank: Power Example

  x1 = Pt(d1)   x2 = Pt(d2)
  P11 = 0.1     P12 = 0.9
  P21 = 0.3     P22 = 0.7

  t0:   0         1         = x
  t1:   0.3       0.7       = xP
  t2:   0.24      0.76      = xP²
  t3:   0.252     0.748     = xP³
  …
  t∞:   0.25      0.75      = xP∞

  Pt(d1) = Pt−1(d1) · P11 + Pt−1(d2) · P21
  Pt(d2) = Pt−1(d1) · P12 + Pt−1(d2) · P22

  PageRank vector = π = (π1, π2) = (0.25, 0.75)

Power method: Example

- What is the PageRank / steady state in this example?
- The steady-state distribution (= the PageRanks) in this example is 0.25 for d1 and 0.75 for d2.

Exercise: Compute PageRank using the power method

[figure: a two-page web graph; its transition probabilities appear in the solution below]

Solution

  x1 = Pt(d1)   x2 = Pt(d2)
  P11 = 0.7     P12 = 0.3
  P21 = 0.2     P22 = 0.8

  t0:   0       1
  t1:   0.2     0.8
  t2:   0.3     0.7
  t3:   0.35    0.65
  …
  t∞:   0.4     0.6

  Pt(d1) = Pt−1(d1) · P11 + Pt−1(d2) · P21
  Pt(d2) = Pt−1(d1) · P12 + Pt−1(d2) · P22

  PageRank vector = π = (π1, π2) = (0.4, 0.6)

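A quick numerical check of the exercise, solving π = πP as a linear system together with the normalization Σi πi = 1:

```python
import numpy as np

# pi = pi P  <=>  (P^T - I) pi = 0; stack the constraint sum(pi) = 1 on top
# and solve the overdetermined system by least squares.
P = np.array([[0.7, 0.3],
              [0.2, 0.8]])
N = P.shape[0]

A = np.vstack([P.T - np.eye(N), np.ones(N)])
b = np.concatenate([np.zeros(N), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(pi)   # -> approximately [0.4, 0.6]
```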
PageRank summary

- Preprocessing
  - Given the graph of links, build the matrix P.
  - Apply teleportation.
  - From the modified matrix, compute π.
  - πi is the PageRank of page i.
- Query processing
  - Retrieve the pages satisfying the query.
  - Rank them by their PageRank.
  - Return the reranked list to the user.

PageRank issues
- Real surfers are not random surfers.
- Examples of nonrandom surfing: back button, short vs. long paths, bookmarks, directories – and search!
- → The Markov model is not a good model of surfing.
- But it is good enough as a model for our purposes.
- Simple PageRank ranking (as described on the previous slide) produces bad results for many pages.
- Consider the query [video service].
- The Yahoo home page (i) has a very high PageRank and (ii) contains both video and service.
- If we rank all Boolean hits according to PageRank, the Yahoo home page would be top-ranked.
- Clearly not desirable.

PageRank issues
- In practice: rank according to a weighted combination of raw text match, anchor text match, PageRank, and other factors.
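As a hedged illustration of such a combination (the weights and the signal functions are invented for this sketch; real engines tune many more features):

    def combined_score(d, q, text_match, anchor_match, pagerank,
                       w_text=0.5, w_anchor=0.3, w_pr=0.2):
        """Hypothetical weighted combination of ranking signals.
        text_match/anchor_match are placeholder scoring functions and
        the weights are made up for illustration."""
        return (w_text * text_match(d, q)        # raw text match, e.g. cosine
                + w_anchor * anchor_match(d, q)  # match against anchor text
                + w_pr * pagerank[d])            # query-independent PageRank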

Example web graph [figure: pages d0–d6 and their links]

Transition (probability) matrix
[table: row-stochastic matrix P over pages d0–d6; the numeric entries did not survive extraction]

Transition matrix with teleporting
[table: the teleport-adjusted matrix over pages d0–d6; numeric entries not recoverable from the source]

Power method vectors xP^k
[table: x, xP^1, …, xP^13 for pages d0–d6; numeric entries not recoverable from the source]

Example web graph: PageRank values
[table of PageRank for d0–d6; only one value survived extraction: PageRank(d6) = 0.31, the highest]

How important is PageRank?
- Frequent claim: PageRank is the most important component of web ranking.
- The reality:
  - There are several components that are at least as important: e.g., anchor text, phrases, proximity, tiered indexes …
  - Rumor has it that PageRank in its original form (as presented here) now has a negligible impact on ranking!
  - However, variants of a page's PageRank are still an essential part of ranking.
  - Addressing link spam is difficult and crucial.

HITS – Hyperlink-Induced Topic Search
- Premise: there are two different types of relevance on the web.
- Relevance type 1: Hubs. A hub page is a good list of links to pages answering the information need.
  - E.g., for the query [chicago bulls]: Bob's list of recommended resources on the Chicago Bulls sports team.
- Relevance type 2: Authorities. An authority page is a direct answer to the information need.
  - E.g., the home page of the Chicago Bulls sports team.
- By definition: links to authority pages occur repeatedly on hub pages.
- Most approaches to search (including PageRank ranking) don't make the distinction between these two very different types of relevance.

Hubs and authorities: Definition
- A good hub page for a topic links to many authority pages for that topic.
- A good authority page for a topic is linked to by many hub pages for that topic.
- Circular definition – we will turn this into an iterative computation.

Example for hubs and authorities [figure]

How to compute hub and authority scores
- Do a regular web search first.
- Call the search result the root set.
- Find all pages that are linked to or link to pages in the root set.
- Call this larger set the base set.
- Finally, compute hubs and authorities for the base set (which we'll view as a small web graph).

Root set and base set (1)
[figure sequence: the root set; nodes that root set nodes link to; nodes that link to root set nodes; together these form the base set]

Root set and base set (2)
- The root set typically has 200–1000 nodes.
- The base set may have up to 5000 nodes.
- Computation of the base set, as shown on the previous slide:
  - Follow outlinks by parsing the pages in the root set.
  - Find d's inlinks by searching for all pages containing a link to d.
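A small sketch of this expansion step, assuming hypothetical helpers outlinks(d) (parse page d) and inlinks(d) (query a link index for pages pointing at d); the size cap mirrors the "up to 5000 nodes" figure:

    def build_base_set(root_set, outlinks, inlinks, cap=5000):
        """Expand the root set into the HITS base set.
        outlinks/inlinks are assumed helper functions, not a real API."""
        base = set(root_set)
        for d in root_set:
            base.update(outlinks(d))   # pages that root set pages link to
            base.update(inlinks(d))    # pages that link to root set pages
            if len(base) >= cap:       # keep the query-time graph small
                break
        return base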

Hub and authority scores
- Compute for each page d in the base set a hub score h(d) and an authority score a(d).
- Initialization: for all d: h(d) = 1, a(d) = 1.
- Iteratively update all h(d), a(d).
- After convergence:
  - Output pages with highest h scores as top hubs.
  - Output pages with highest a scores as top authorities.
  - So we output two ranked lists.

Iterative update
- For all d: h(d) = Σ_{d→y} a(y), the sum of the authority scores of the pages d links to.
- For all d: a(d) = Σ_{y→d} h(y), the sum of the hub scores of the pages linking to d.
- Iterate these two steps until convergence.
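A compact sketch of these updates in Python, assuming the base-set graph is given as a set of (source, target) link pairs; the fixed iteration count and the L2 scaling (see Details below) are illustrative choices:

    def hits(pages, links, iters=50):
        """HITS updates on the base-set graph. `pages` is a list of page
        ids, `links` a set of (source, target) pairs; returns (h, a)."""
        h = {d: 1.0 for d in pages}     # initialization: h(d) = 1
        a = {d: 1.0 for d in pages}     # initialization: a(d) = 1
        for _ in range(iters):          # usually converges in a few steps
            h = {d: sum(a[y] for (x, y) in links if x == d) for d in pages}
            a = {d: sum(h[x] for (x, y) in links if y == d) for d in pages}
            # Scale down so scores don't grow without bound; the exact
            # factor is irrelevant since only relative values are used.
            for v in (h, a):
                norm = sum(s * s for s in v.values()) ** 0.5 or 1.0
                for d in v:
                    v[d] /= norm
        return h, a

Sorting h and a by value then yields the two ranked lists.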

Details
- Scaling
  - To prevent the a(·) and h(·) values from getting too big, we can scale down after each iteration.
  - The scaling factor doesn't really matter: we care about the relative (as opposed to absolute) values of the scores.
- In most cases, the algorithm converges after a few iterations.

Authorities for query [Chicago Bulls]
0.85  www.nba.com/bulls
0.25  www.essex1.com/people/jmiller/bulls.htm  "da Bulls"
0.20  www.nando.net/SportServer/basketball/nba/chi.html  "The Chicago Bulls"
0.15  Users.aol.com/rynocub/bulls.htm  "The Chicago Bulls Home Page"
0.13  www.geocities.com/Colosseum/6095  "Chicago Bulls"
(Ben Shaul et al., WWW8)

The authority page for [Chicago Bulls] [figure]

Hubs for query [Chicago Bulls]
1.62  www.geocities.com/Colosseum/1778  "Unbelieveabulls!!!!!"
1.24  www.webring.org/cgi-bin/webring?ring=chbulls  "Chicago Bulls"
0.74  www.geocities.com/Hollywood/Lot/3330/Bulls.html  "Chicago Bulls"
0.52  www.nobull.net/web_position/kw-search-15-M2.html  "Excite Search Results: bulls"
0.52  www.halcyon.com/wordltd/bball/bulls.html  "Chicago Bulls Links"
(Ben Shaul et al., WWW8)

A hub page for [Chicago Bulls] [figure]

Hub & Authorities: Comments
- HITS can pull together good pages regardless of page content.
- Once the base set is assembled, we only do link analysis, no text matching.
- Pages in the base set often do not contain any of the query words.
- In theory, an English query can retrieve Japanese-language pages! (If supported by the link structure between English and Japanese pages.)
- Danger: topic drift – the pages found by following links may not be related to the original query.

Proof of convergence
- We define an N×N adjacency matrix A. (We called this the link matrix earlier.)
- For 1 ≤ i, j ≤ N, the matrix entry A_ij tells us whether there is a link from page i to page j (A_ij = 1) or not (A_ij = 0).
- Example:

          d1  d2  d3
      d1   0   1   0
      d2   1   1   1
      d3   1   0   0
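The same example as a concrete array (a small sketch; indices are 0-based, so d1 is row 0):

    import numpy as np

    # Adjacency matrix of the 3-page example: rows are source pages,
    # columns are targets (d1, d2, d3 in order).
    A = np.array([[0, 1, 0],    # d1 links to d2
                  [1, 1, 1],    # d2 links to d1, d2 (itself), and d3
                  [1, 0, 0]])   # d3 links to d1
    assert A[1, 2] == 1         # A_23 = 1: there is a link from d2 to d3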

Write update rules as matrix operations
- Define the hub vector h = (h_1, …, h_N) as the vector of hub scores; h_i is the hub score of page d_i.
- Similarly for a, the vector of authority scores.
- Now we can write the update h(d) = Σ_{d→y} a(y) as a matrix operation: h = Aa …
- … and we can write a(d) = Σ_{y→d} h(y) as a = A^T h.
- HITS algorithm in matrix notation:
  - Compute h = Aa.
  - Compute a = A^T h.
  - Iterate until convergence.
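A minimal numpy sketch of the matrix-notation algorithm (the iteration count and the per-step normalization are illustrative; A can be the example array above):

    import numpy as np

    def hits_matrix(A, iters=50):
        """HITS in matrix notation: repeat h = A a, a = A^T h,
        rescaling after each step (iteration count is illustrative)."""
        n = A.shape[0]
        h, a = np.ones(n), np.ones(n)
        for _ in range(iters):
            h = A @ a                # hub score: sum of linked authorities
            a = A.T @ h              # authority score: sum of linking hubs
            h /= np.linalg.norm(h)   # scale down; only relative values matter
            a /= np.linalg.norm(a)
        return h, a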

HITS as eigenvector problem
- HITS algorithm in matrix notation. Iterate:
  - Compute h = Aa.
  - Compute a = A^T h.
- By substitution we get: h = AA^T h and a = A^T Aa.
- Thus, h is an eigenvector of AA^T and a is an eigenvector of A^T A.
- So the HITS algorithm is actually a special case of the power method, and hub and authority scores are eigenvector values.
- HITS and PageRank both formalize link analysis as eigenvector problems.
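A quick sanity check of the eigenvector claim, reusing hits_matrix and the 3-page matrix A from the sketches above (purely illustrative):

    h, a = hits_matrix(A)
    # eigh returns eigenvalues in ascending order, so the last column of
    # the eigenvector matrix is the principal eigenvector.
    _, Vh = np.linalg.eigh(A @ A.T)   # A A^T is symmetric
    _, Va = np.linalg.eigh(A.T @ A)
    print(np.allclose(h, np.abs(Vh[:, -1])))   # True: h is the principal eigenvector
    print(np.allclose(a, np.abs(Va[:, -1])))   # True: a is the principal eigenvector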

Example web graph [figure, repeated from above]

Raw matrix A for HITS
[table: 0/1 adjacency matrix over pages d0–d6; entries not recoverable from the source]

Hub vectors h_0; h_i = A·a_i, i ≥ 1
[table of successive hub vectors h_0 … h_5 for d0–d6; numeric values not recoverable]

Authority vectors a_i = A^T·h_{i−1}, i ≥ 1
[table of successive authority vectors a_1 … a_7 for d0–d6; numeric values not recoverable]

Example web graph: final authority (a) and hub (h) scores
[table of a and h for d0–d6; numeric values not recoverable]

Top-ranked pages
- Pages with highest in-degree: d2, d3, d6
- Pages with highest out-degree: d2, d6
- Pages with highest PageRank: d6
- Pages with highest hub score: d6 (close: d2)
- Pages with highest authority score: d3

PageRank vs. HITS: Discussion
- PageRank can be precomputed; HITS has to be computed at query time.
  - HITS is too expensive in most application scenarios.
- PageRank and HITS make two different design choices concerning (i) the eigenproblem formalization and (ii) the set of pages to apply the formalization to.
- These two choices are orthogonal.
  - We could also apply HITS to the entire web and PageRank to a small base set.
- Claim: on the web, a good hub is almost always also a good authority.
- The actual difference between PageRank ranking and HITS ranking is therefore not as large as one might expect.