Intelligent IR on the World Wide Web
CSC 575: Intelligent Information Retrieval

2. Intelligent IR on the World Wide Web
- Web IR versus classic IR
- Web spiders and crawlers
- Citation/hyperlink indexing and analysis
- Intelligent agents for the Web

3. IR on the Web vs. Classic IR
- Input: the publicly accessible Web
- Goal: retrieve high-quality pages that are relevant to the user's need
  - static pages (text, audio, images, etc.)
  - dynamically generated pages (mostly database access)
- What's different about the Web:
  - large volume
  - distributed data
  - heterogeneity of the data
  - lack of stability
  - high duplication
  - high linkage
  - lack of quality standards

4. Search Engine Early History
- In 1990, Alan Emtage of McGill University developed Archie (short for "archives")
  - Assembled lists of files available on many FTP servers
  - Allowed regex search of these file names
- In 1993, Veronica and Jughead were developed to search the names of text files available through Gopher servers
- In 1993, early Web robots (spiders) were built to collect URLs:
  - Wanderer
  - ALIWEB (Archie-Like Index of the WEB)
  - WWW Worm (indexed URLs and titles for regex search)
- In 1994, Stanford grad students David Filo and Jerry Yang started manually collecting popular web sites into a topical hierarchy called Yahoo

5. Search Engine Early History (cont.)
- In early 1994, Brian Pinkerton developed WebCrawler as a class project at the University of Washington
  - Eventually became part of Excite and AOL
- A few months later, Michael "Fuzzy" Mauldin, a grad student at CMU, developed Lycos
  - First to use a standard IR system
  - First to index a large set of pages
- In late 1995, DEC developed AltaVista
  - Used a large farm of Alpha machines to quickly process large numbers of queries
  - Supported Boolean operators and phrases in queries
- In 1998, Larry Page and Sergey Brin, Ph.D. students at Stanford, started Google
  - Main advance was the use of link analysis to rank results partially based on authority

6. Web Search
[Figure: a Web Spider crawls the Web to build the document corpus; the IR System matches a Query String against the corpus and returns Ranked Documents (1. Page1, 2. Page2, 3. Page3, ...).]

7. Spiders (Robots/Bots/Crawlers)
- Start with a comprehensive set of root URLs from which to begin the search.
- Follow all links on these pages recursively to find additional pages.
- Index all newly found pages in an inverted index as they are encountered.
- May allow users to directly submit pages to be indexed (and crawled from).

8. Search Strategy Trade-Offs
- Breadth-first search explores uniformly outward from the root page but requires memory of all nodes on the previous level (exponential in depth). It is the standard spidering method.
- Depth-first search requires memory of only depth times branching factor (linear in depth) but gets "lost" pursuing a single thread.
- Both strategies can be implemented using a queue of links (URLs).

9. Avoiding Page Duplication
- Must detect when revisiting a page that has already been spidered (the Web is a graph, not a tree).
- Must efficiently index visited pages to allow a rapid recognition test.
  - Tree indexing (e.g. a trie) or a hashtable.
- Index the page using its URL as a key.
  - Must canonicalize URLs (e.g. delete a trailing "/").
  - Does not detect duplicated or mirrored pages.
- Index the page using its textual content as a key.
  - Requires first downloading the page.
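
A minimal sketch of the URL-as-key approach in Python (standard library only). The particular normalization rules here (lowercasing the scheme and host, dropping the fragment and any trailing slash) are illustrative assumptions, not a complete canonicalizer:

```python
from urllib.parse import urlsplit, urlunsplit

def canonicalize(url: str) -> str:
    """Normalize a URL so trivially different forms hash to the same key."""
    parts = urlsplit(url)
    path = parts.path.rstrip("/") or "/"      # delete ending "/"
    return urlunsplit((parts.scheme.lower(),
                       parts.netloc.lower(),  # host names are case-insensitive
                       path,
                       parts.query,
                       ""))                   # drop the #fragment

visited = set()                               # hashtable keyed on canonical URL

def seen(url: str) -> bool:
    """Return True if the URL was already visited; otherwise record it."""
    key = canonicalize(url)
    if key in visited:
        return True
    visited.add(key)
    return False
```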

10. Spidering Algorithm

Initialize queue (Q) with the initial set of known URLs.
Until Q is empty or the page or time limit is exhausted:
    Pop URL, L, from the front of Q.
    If L is not an HTML page (.gif, .jpeg, .ps, .pdf, .ppt, ...), continue loop.
    If L has already been visited, continue loop.
    Download page, P, for L.
    If P cannot be downloaded (e.g. 404 error, robot excluded), continue loop.
    Index P (e.g. add to the inverted index or store a cached copy).
    Parse P to obtain a list of new links N.
    Append N to the end of Q.
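
The pseudocode translates almost line for line into Python. The sketch below reuses the canonicalize/seen helpers from the previous slide and, for brevity, uses the requests library and a crude regex link extractor; a real spider would also honor robots.txt and parse HTML properly:

```python
import re
from collections import deque
import requests

NON_HTML = (".gif", ".jpeg", ".jpg", ".png", ".ps", ".pdf", ".ppt")

def spider(seed_urls, page_limit=100):
    q = deque(seed_urls)                    # queue Q of URLs
    index = {}                              # stands in for the inverted index
    while q and len(index) < page_limit:
        url = q.popleft()                   # pop L from the front of Q
        if url.lower().endswith(NON_HTML):  # skip non-HTML pages
            continue
        if seen(url):                       # already visited?
            continue
        try:
            resp = requests.get(url, timeout=5)
            resp.raise_for_status()         # e.g. 404 -> skip
        except requests.RequestException:
            continue
        index[url] = resp.text              # "index" P (cache the page)
        links = re.findall(r'href="(http[^"]+)"', resp.text)
        q.extend(links)                     # append N to the end of Q
    return index
```

Because new links are appended to the end of the queue, this version crawls breadth-first; the queueing strategies on the next slide amount to changing that one line.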

11. Queueing Strategy
- How new links are added to the queue determines the search strategy.
- FIFO (append to the end of Q) gives breadth-first search.
- LIFO (add to the front of Q) gives depth-first search.
- Heuristically ordering Q gives a "focused crawler" that directs its search towards "interesting" pages.
- May be able to use standard AI search algorithms such as best-first search, A*, etc.

12. Restricting Spidering
- Restrict the spider to a particular site.
  - Remove links to other sites from Q.
- Restrict the spider to a particular directory.
  - Remove links not in the specified directory.
- Obey page-owner restrictions (the robot exclusion protocol, robots.txt).

13. Anchor Text Indexing
- Extract the anchor text (between <a href="..."> and </a>) of each link.
- Anchor text is usually descriptive of the document to which it points.
- Add anchor text to the content of the destination page to provide additional relevant keyword indices.
- Used by Google, e.g. anchors such as:
  - <a href="http://www.microsoft.com">Evil Empire</a>
  - <a href="http://www.ibm.com">IBM</a>
- Helps when the descriptive text in the destination page is embedded in image logos rather than in accessible text.
- Many times anchor text is not useful: "click here".
- Increases content more for popular pages with many incoming links, increasing recall of these pages.
- May even give higher weights to tokens from anchor text.
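
A hedged sketch of anchor-text harvesting with Python's html.parser: each link's anchor text is credited to the term index of the destination page. The stop-phrase list and the idea of weighting anchor tokens higher are illustrative assumptions:

```python
from html.parser import HTMLParser
from collections import defaultdict

USELESS = {"click here", "here", "more"}     # assumed stop-phrases

class AnchorCollector(HTMLParser):
    """Collects (destination URL, anchor text) pairs from one page."""
    def __init__(self):
        super().__init__()
        self.href, self.text, self.pairs = None, [], []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.href = dict(attrs).get("href")
            self.text = []
    def handle_data(self, data):
        if self.href is not None:            # inside an <a>...</a> span
            self.text.append(data)
    def handle_endtag(self, tag):
        if tag == "a" and self.href:
            anchor = " ".join(self.text).strip().lower()
            if anchor and anchor not in USELESS:
                self.pairs.append((self.href, anchor))
            self.href = None

anchor_index = defaultdict(list)             # destination URL -> anchor terms

def index_anchors(html):
    p = AnchorCollector()
    p.feed(html)
    for dest, anchor in p.pairs:
        # Credit anchor terms to the *destination* page; these tokens
        # could be given a higher weight than body text.
        anchor_index[dest].extend(anchor.split())
```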

14. Multi-Threaded Spidering
- The bottleneck is network delay in downloading individual pages.
- Best to have multiple threads running in parallel, each requesting a page from a different host.
- Distribute URLs to threads so that requests spread equitably across different hosts, maximizing throughput and avoiding overloading any single server.
- The early Google spider had multiple coordinated crawlers with about 300 threads each, together able to download over 100 pages per second.
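
A minimal concurrency sketch with concurrent.futures. The key idea from the slide is spreading in-flight requests across hosts; here that is approximated by round-robin ordering over per-host buckets (the Google spider's 300 threads would just be a larger max_workers):

```python
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor
from itertools import zip_longest
from urllib.parse import urlsplit
import requests

def fetch(url):
    try:
        return url, requests.get(url, timeout=5).text
    except requests.RequestException:
        return url, None                     # network errors yield no content

def parallel_crawl(urls, threads=8):
    buckets = defaultdict(list)              # group URLs by host
    for u in urls:
        buckets[urlsplit(u).netloc].append(u)
    # Round-robin across hosts so concurrent requests tend to hit
    # different servers rather than hammering one host.
    ordered = [u for batch in zip_longest(*buckets.values()) for u in batch if u]
    with ThreadPoolExecutor(max_workers=threads) as pool:
        return dict(pool.map(fetch, ordered))
```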

15. Directed/Focused Spidering
- Sort the queue to explore more "interesting" pages first.
- Two styles of focus:
  - Topic-directed
  - Link-directed

16. Topic-Directed Spidering
- Assume a desired topic description or sample pages of interest are given.
- Sort the queue of links by the similarity (e.g. cosine metric) of their source pages and/or anchor text to this topic description.
- Preferentially explores pages related to a specific topic (see the sketch below).
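
A sketch of the ordering step, assuming a deliberately crude bag-of-words vectorizer: score each frontier link by the cosine similarity between its source page (or anchor text) and the topic description, and always pop the best-scoring link first:

```python
import heapq
import math
import re
from collections import Counter

def vec(text):
    """Crude bag-of-words term-frequency vector."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(v, w):
    dot = sum(v[t] * w[t] for t in v)
    norm = math.sqrt(sum(x * x for x in v.values())) * \
           math.sqrt(sum(x * x for x in w.values()))
    return dot / norm if norm else 0.0

topic = vec("java programming language virtual machine")  # topic description

frontier = []   # max-heap via negated scores: (-similarity, url)

def enqueue(url, source_text):
    heapq.heappush(frontier, (-cosine(vec(source_text), topic), url))

def next_url():
    return heapq.heappop(frontier)[1]   # most topic-like link first
```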

17. Link-Directed Spidering
- Monitor links and keep track of the in-degree and out-degree of each page encountered.
- Sort the queue to prefer popular pages with many incoming links (authorities).
- Sort the queue to prefer summary pages with many outgoing links (hubs).

18. Keeping Spidered Pages Up to Date
- The Web is very dynamic: many new pages, updated pages, deleted pages, etc.
- Periodically check spidered pages for updates and deletions:
  - Just look at header info (e.g. META tags on last update) to determine whether the page has changed; only reload the entire page if needed.
- Track how often each page is updated and preferentially return to pages which are historically more dynamic.
- Preferentially update pages that are accessed more often, to optimize the freshness of more popular pages.

19. Quality and the WWW: The Case for Connectivity Analysis
- Basic idea: mine hyperlink information on the Web.
- Assumptions:
  - links often connect related pages
  - a link between pages is a "recommendation"
- Approaches:
  - classic IR: co-citation analysis (a.k.a. "bibliometrics")
  - connectivity-based ranking (e.g., Google)
  - HITS: hypertext induced topic search

20. Co-Citation Analysis
- Has been around since the 1950s (Small, Garfield, White & McCain).
- Used to identify core sets of authors, journals, and articles for particular fields of study.
- Main idea:
  - Find pairs of papers that cite the same third papers.
  - Look for commonalities.

21. Co-citation analysis (from Garfield 98)
[Figure: The Global Map of Science, based on co-citation clustering. The size of each circle represents the number of papers published in the area; the distance between circles represents the level of co-citation between the fields. By zooming in, deeper levels of the hierarchy can be exposed.]

22. Co-citation analysis (from Garfield 98)
[Figure: Zooming in on biomedicine, specialties including cardiology, immunology, etc. can be viewed.]

23. Co-citation analysis (from Garfield 98)
[Figure only.]

24. CiteSeer: A Web Agent for Citation Analysis (Bollacker, Lawrence & Giles)
The CiteSeer agent consists of three main components: (i) a sub-agent to automatically locate and acquire publications, (ii) a document parser and database creator, and (iii) a browser interface which supports search by keyword and browsing by citation links.

27. Citations vs. Links
- Web links are a bit different from citations:
  - Many links are navigational.
  - Many pages with high in-degree are portals, not content providers.
  - Not all links are endorsements; company websites don't point to their competitors.
  - Citation of relevant literature is enforced by peer review.
- Authorities
  - Pages that are recognized as providing significant, trustworthy, and useful information on a topic.
  - In-degree (the number of pointers to a page) is one simple measure of authority.
  - However, in-degree treats all links as equal. Shouldn't links from pages that are themselves authoritative count more?
- Hubs
  - Index pages that provide lots of useful links to relevant content pages (topic authorities).

28. Hypertext Induced Topic Search (HITS)
- Basic idea: look for "authority" and "hub" web pages (Kleinberg 98).
  - authority: definitive content for a topic
  - hub: index of links to good content
  - The two distinctions tend to blend.
- Procedure:
  - Issue a query on a term, e.g. "java".
  - Get back a set of documents.
  - Look at the in-link and out-link patterns for the set of retrieved documents.
  - Perform statistical analysis to see which patterns are the most dominant ones.
- The technique was initially used in IBM's CLEVER system.
  - Can find some good starting points for some topics.
  - Doesn't solve the whole search problem!
  - Doesn't make explicit use of content (so may result in "topic drift" from the original query).

29. Hypertext Induced Topic Search
- Intuition behind the HITS algorithm:
  - Authority comes from in-edges; being a good hub comes from out-edges.
  - Mutually reinforcing relationship:
    - Better authority comes from in-edges of good hubs.
    - Being a better hub comes from out-edges to good authorities.
- A good authority is a page that is pointed to by many good hubs. A good hub is a page that points to many good authorities. Together they tend to form a bipartite graph (hubs on one side, authorities on the other).

30. HITS Algorithm
- Computes hubs and authorities for a particular topic specified by a normal query.
- First determine a set of relevant pages for the query, called the base set (base subgraph) S:
  - For a specific query Q, let the set of documents returned by a standard search engine be called the root set R. Initialize S to R.
  - Add to S all pages pointed to by any page in R.
  - Add to S all pages that point to any page in R.
- Analyze the link structure of the web subgraph defined by S to find authority and hub pages in this set.
[Figure: the root set R nested inside the base set S.]

31. HITS: Some Considerations
- Base set limitations
  - To limit computational expense:
    - Limit the number of root pages to the top 200 pages retrieved for the query.
    - Limit the number of "back-pointer" pages to a random set of at most 50 pages returned by a "reverse link" query.
  - To eliminate purely navigational links:
    - Eliminate links between two pages on the same host.
  - To eliminate "non-authority-conveying" links:
    - Allow only m (m ≈ 4 to 8) pages from a given host as pointers to any individual page.
- Authorities and in-degree
  - Even within the base set S for a given query, the nodes with the highest in-degree are not necessarily authorities (they may just be generally popular pages like Yahoo or Amazon).
  - True authority pages are pointed to by a number of hubs (i.e. pages that point to lots of authorities).

32. HITS: Iterative Algorithm
- Use an iterative algorithm to slowly converge on a mutually reinforcing set of hubs and authorities.
- Maintain for each page p ∈ S:
  - Authority score: a_p (vector a)
  - Hub score: h_p (vector h)
- Initialize all a_p = h_p = 1, and maintain normalized scores.
- Authorities are pointed to by lots of good hubs:
    a_p = Σ_{q : q→p} h_q
- Hubs point to lots of good authorities:
    h_p = Σ_{q : p→q} a_q

33. Illustrated Update Rules
[Figure: page 4 with in-links from pages 1, 2, 3 and out-links to pages 5, 6, 7.]
    a_4 = h_1 + h_2 + h_3
    h_4 = a_5 + a_6 + a_7

34. HITS: Iterative Algorithm

Initialize for all p ∈ S: a_p = h_p = 1
For i = 1 to k:
    For all p ∈ S: a_p = Σ_{q : q→p} h_q      (update authority scores)
    For all p ∈ S: h_p = Σ_{q : p→q} a_q      (update hub scores)
    For all p ∈ S: a_p = a_p / c, with c chosen so that Σ_p a_p² = 1    (normalize a)
    For all p ∈ S: h_p = h_p / c, with c chosen so that Σ_p h_p² = 1    (normalize h)
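
A compact Python rendering of the algorithm above, under the slide's conventions: the graph is a dict from page to its out-links within S, all scores start at 1, and each pass L2-normalizes both vectors:

```python
import math

def hits(out_links, k=20):
    """out_links: dict page -> list of pages it points to (within S)."""
    pages = set(out_links) | {p for succ in out_links.values() for p in succ}
    in_links = {p: [] for p in pages}
    for p, succ in out_links.items():
        for q in succ:
            in_links[q].append(p)
    a = {p: 1.0 for p in pages}
    h = {p: 1.0 for p in pages}
    for _ in range(k):
        a = {p: sum(h[q] for q in in_links[p]) for p in pages}          # auth update
        h = {p: sum(a[q] for q in out_links.get(p, [])) for p in pages} # hub update
        for v in (a, h):                       # normalize to unit L2 norm
            c = math.sqrt(sum(x * x for x in v.values())) or 1.0
            for p in v:
                v[p] /= c
    return a, h
```

For example, hits({"A": ["C", "D"], "B": ["C", "D", "E"]}, k=1), an edge set consistent with the numbers on the example slide below, reproduces the normalized first-iteration vectors shown there.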

35. HITS Example
[Figure: example graph over pages A-E.]
First iteration:
    A (authorities): [0.0, 0.0, 2.0, 2.0, 1.0]
    H (hubs): [4.0, 5.0, 0.0, 0.0, 0.0]
    Norm A: [0.0, 0.0, 0.67, 0.67, 0.33]
    Norm H: [0.62, 0.78, 0.0, 0.0, 0.0]
Normalize: divide each vector by its norm (the square root of the sum of the squares).

36. HITS Algorithm
- Let HUB[v] and AUTH[v] represent the hub and authority values associated with a vertex v.
- Repeat until the HUB and AUTH vectors converge:
  - Normalize HUB and AUTH.
  - HUB[v] := Σ AUTH[u_i] over all u_i with Edge(v, u_i)
  - AUTH[v] := Σ HUB[w_i] over all w_i with Edge(w_i, v)
[Figure: vertex v with out-edges to u_1 ... u_k and in-edges from w_1 ... w_k.]

37. Convergence
- The algorithm converges to a fixed point if iterated indefinitely.
- Define A to be the adjacency matrix for the subgraph defined by S:
  - A_ij = 1 for i, j ∈ S iff i → j
- The authority vector a converges to the principal eigenvector of AᵀA.
- The hub vector h converges to the principal eigenvector of AAᵀ.
- In practice, 20 iterations produce fairly stable results.
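
The fixed-point claim is easy to check numerically. A short numpy sketch, using the same edge set assumed for the example above; the converged authority vector is the (scaled) principal eigenvector of AᵀA:

```python
import numpy as np

# Adjacency matrix for the assumed toy subgraph: A[i, j] = 1 iff page i -> page j.
pages = ["A", "B", "C", "D", "E"]
A = np.zeros((5, 5))
for i, j in [(0, 2), (0, 3), (1, 2), (1, 3), (1, 4)]:   # A->C,D  B->C,D,E
    A[i, j] = 1

w, V = np.linalg.eigh(A.T @ A)            # A^T A is symmetric
principal = np.abs(V[:, np.argmax(w)])    # eigenvector of the largest eigenvalue
print(principal / np.linalg.norm(principal))
# ~ [0, 0, 0.66, 0.66, 0.37]: the converged authority scores,
# close to the one-iteration values [0, 0, 0.67, 0.67, 0.33] above.
```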

38. HITS Results
- Authorities for the query "Java":
  - java.sun.com
  - comp.lang.java FAQ
- Authorities for the query "search engine":
  - Yahoo.com
  - Excite.com
  - Lycos.com
  - Altavista.com
- Authorities for the query "Gates":
  - Microsoft.com
  - roadahead.com
- In most cases, the final authorities were not in the initial root set generated using AltaVista. Authorities were brought in from linked and reverse-linked pages, and HITS then computed their high authority scores.

39. HITS: Other Applications
- Finding similar pages using link structure:
  - Given a page P, let R (the root set) be t (e.g. 200) pages that point to P.
  - Grow a base set S from R.
  - Run HITS on S.
  - Return the best authorities in S as the best similar pages for P.
  - Finds authorities in the "link neighborhood" of P.
- Similar pages to "honda.com": toyota.com, ford.com, bmwusa.com, saturncars.com, nissanmotors.com, audi.com, volvocars.com

40. HITS: Other Applications
- HITS for clustering:
  - An ambiguous query can result in the principal eigenvector covering only one of the possible meanings.
  - Non-principal eigenvectors may contain hubs and authorities for other meanings.
  - Example: "jaguar":
    - Atari video game (principal eigenvector)
    - NFL football team (2nd non-principal eigenvector)
    - Automobile (3rd non-principal eigenvector)
  - An application of Principal Component Analysis (PCA).

41. HITS: Problems and Solutions
- Some edges are wrong (not "recommendations"):
  - multiple edges from the same author
  - automatically generated links
  - spam
  - Solution: weight edges to limit influence.
- Topic drift:
  - Query: "jaguar AND cars"; result: pages about cars in general.
  - Solution: analyze content and assign topic scores to nodes.

42. Modified HITS Algorithm
- Let HUB[v] and AUTH[v] represent the hub and authority values associated with a vertex v.
- Repeat until the HUB and AUTH vectors converge:
  - Normalize HUB and AUTH.
  - HUB[v] := Σ AUTH[u_i] · TopicScore[u_i] · Weight(v, u_i) over all u_i with Edge(v, u_i)
  - AUTH[v] := Σ HUB[w_i] · TopicScore[w_i] · Weight(w_i, v) over all w_i with Edge(w_i, v)
- The topic score is determined by a similarity measure between the query and the documents.
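
In code, the modification touches only the two update lines of the hits() sketch above. Here topic and weight are assumed inputs (e.g. topic[p] from query-page cosine similarity, and weight[(u, v)] from whatever edge-weighting scheme is chosen to limit same-author influence):

```python
def modified_hits_step(a, h, pages, in_links, out_links, topic, weight):
    """One update pass with topic scores and edge weights (cf. hits() above).
    topic[p] and weight[(u, v)] are assumed to be given."""
    a = {p: sum(h[q] * topic[q] * weight[(q, p)] for q in in_links[p])
         for p in pages}
    h = {p: sum(a[q] * topic[q] * weight[(p, q)] for q in out_links.get(p, []))
         for p in pages}
    return a, h   # caller normalizes, as in hits()
```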

43. PageRank
- An alternative link-analysis method, used by Google (Brin & Page, 1998).
- Does not attempt to capture the distinction between hubs and authorities.
- Ranks pages just by authority.
- Applied to the entire Web rather than a local neighborhood of pages surrounding the results of a query.

44. Initial PageRank Idea
- Just measuring in-degree (citation count) doesn't account for the authority of the source of a link.
- Initial page rank equation for page p:
    R(p) = c · Σ_{q : q→p} R(q) / N_q
- N_q is the total number of out-links from page q.
- A page q "gives" an equal fraction of its authority to all the pages it points to (e.g. p).
- c is a normalizing constant, set so that the ranks of all pages always sum to 1.

45. Initial PageRank Idea
- Can view it as a process of PageRank "flowing" from pages to the pages they cite.
[Figure: rank flowing along the edges of a small web graph.]

46. Initial PageRank Algorithm

Iterate the rank-flowing process until convergence:
Let S be the total set of pages.
Initialize ∀p ∈ S: R(p) = 1/|S|
Until ranks do not change (much):    (convergence)
    For each p ∈ S:
        R′(p) = Σ_{q : q→p} R(q) / N_q
    For each p ∈ S:
        R(p) = c · R′(p)    (normalize so ranks sum to 1)

47. Sample Stable Fixpoint
[Figure: a small web graph with converged PageRank values at each node.]

48. Linear Algebra Version
- Treat R as a vector over web pages.
- Let A be a 2-d matrix over pages where A_vu = 1/N_u if u → v, else A_vu = 0.
- Then R = cAR, so R converges to the principal eigenvector of A.

49. Problem with the Initial Idea
- A group of pages that only point to themselves but are pointed to by other pages acts as a "rank sink" and absorbs all the rank in the system.
- Solution: rank source.
  - Introduce a "rank source" E that continually replenishes the rank of each page p by a fixed amount E(p).

50. PageRank Algorithm

Let S be the total set of pages.
Let ∀p ∈ S: E(p) = α/|S|   (for some 0 < α < 1, e.g. α = 0.15)
Initialize ∀p ∈ S: R(p) = 1/|S|
Until ranks do not change (much):    (convergence)
    For each p ∈ S:
        R′(p) = Σ_{q : q→p} R(q)/N_q + E(p)
    For each p ∈ S:
        R(p) = c · R′(p)    (normalize so ranks sum to 1)

51. PageRank Example (first iteration only)

Graph: A → B, A → C, B → C (edges reconstructed from the arithmetic below), with α = 0.3, so E(p) = 0.3/3 = 0.1 for each page.

Initial R: [R(A), R(B), R(C)] = [0.33, 0.33, 0.33]

Before normalization:
    R′(A) = 0.3/3 = 0.1
    R′(B) = R(A)/2 + 0.3/3 ≈ 0.27
    R′(C) = R(A)/2 + R(B)/1 + 0.3/3 ≈ 0.595
Normalization factor: c = 1/[R′(A) + R′(B) + R′(C)] = 1/0.965

After normalization:
    R = [0.104, 0.28, 0.617]
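
A direct Python transcription of the algorithm on the previous slide, with the rank source E folded into each iteration. Run for one iteration on the example graph, it reproduces the slide's numbers up to rounding (the slide rounds 1/3 to 0.33 before dividing):

```python
def pagerank(out_links, alpha=0.3, iters=50, E=None):
    """out_links: dict page -> list of out-link targets. E: rank source."""
    pages = set(out_links) | {p for succ in out_links.values() for p in succ}
    n = len(pages)
    if E is None:
        E = {p: alpha / n for p in pages}      # uniform rank source
    R = {p: 1.0 / n for p in pages}            # initialize R(p) = 1/|S|
    for _ in range(iters):
        new = dict(E)                          # start from the rank source
        for q, succ in out_links.items():
            for p in succ:
                new[p] += R[q] / len(succ)     # q gives R(q)/N_q to each p
        c = 1.0 / sum(new.values())            # normalize: ranks sum to 1
        R = {p: c * new[p] for p in pages}
    return R

# One iteration on the example graph:
# pagerank({"A": ["B", "C"], "B": ["C"]}, alpha=0.3, iters=1)
# -> A: 0.103, B: 0.276, C: 0.621  (slide: 0.104, 0.28, 0.617)
```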

52. Random Surfer Model
- PageRank can be seen as modeling a "random surfer" who starts on a random page and then, at each step:
  - With probability E(p), randomly jumps to page p.
  - Otherwise, randomly follows a link on the current page.
- R(p) models the probability that this random surfer will be on page p at any given time.
- "E jumps" are needed to prevent the random surfer from getting "trapped" in web sinks with no outgoing links.

53. Speed of Convergence
- Early experiments on Google used 322 million links.
- The PageRank algorithm converged (within a small tolerance) in about 52 iterations.
- The number of iterations required for convergence is empirically O(log n) (where n is the number of links), so the calculation is quite efficient.

54. Google Ranking
- The complete Google ranking included (based on university publications prior to commercialization):
  - a vector-space similarity component
  - a keyword proximity component
  - an HTML-tag weight component (e.g. title preference)
  - a PageRank component
- Details of current commercial ranking functions are trade secrets.

55. Personalized PageRank
- PageRank can be biased (personalized) by changing E to a non-uniform distribution.
- Restrict "random jumps" to a set of specified relevant pages.
- For example, let E(p) = 0 except for one's own home page, for which E(p) = α.
- This results in a bias towards pages that are closer in the web graph to your own home page.
- Similar personalization can be achieved by setting E(p) > 0 only for pages p that are part of the user's profile.
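
With the pagerank() sketch above, this is just a different E vector, zero everywhere except the profile pages. A toy illustration (hypothetical page names):

```python
# Personalized PageRank: random jumps land only on "my" home page.
graph = {"home": ["a"], "a": ["b"], "b": ["home", "c"]}    # toy web graph
pages = set(graph) | {p for succ in graph.values() for p in succ}
E = {p: 0.0 for p in pages}
E["home"] = 0.3               # E(p) = alpha at the profile page, 0 elsewhere
ranks = pagerank(graph, alpha=0.3, E=E)   # biased toward pages near "home"
```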

56. PageRank-Biased Spidering
- Use PageRank to direct (focus) a spider on "important" pages.
- Compute PageRank using the current set of crawled pages.
- Order the spider's search queue based on the current estimated PageRank.

57. Link Analysis Conclusions
- Link analysis uses information about the structure of the web graph to aid search.
- It is one of the major innovations in web search.
- It is the primary reason for Google's success.

58. Behavior-Based Ranking
- The emergence of large-scale search engines allows mining of aggregate user behavior to improve ranking.
- Basic idea:
  - For each query Q, keep track of which docs in the results are clicked on.
  - On subsequent requests for Q, re-order the docs in the results based on click-throughs.
- Relevance assessment based on behavior/usage vs. content.

59. Query-Doc Popularity Matrix B
- Rows are queries q, columns are docs j.
- B_qj = number of times doc j was clicked through on query q.
- When query q is issued again, order docs by their B_qj values.

60. Vector Space Implementation
- Maintain a term-doc popularity matrix C (as opposed to query-doc popularity), initialized to all zeros.
- Each column C_j represents a doc j.
- If doc j is clicked on for query q, update C_j ← C_j + εq (here q is viewed as a term vector and ε is a small update weight).
- On a query q′, compute its cosine proximity to C_j for all j.
- Combine this with the regular text score.
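
A minimal sketch of both matrices under the slides' assumptions. Docs and queries are plain strings here, and the update weight eps and combination weight w are made-up parameters:

```python
import math
from collections import Counter, defaultdict

B = defaultdict(Counter)   # B[q][j]: clicks on doc j for the exact query q
C = defaultdict(Counter)   # C[j]: term-popularity vector for doc j

def record_click(query, doc, eps=1.0):
    B[query][doc] += 1                        # query-doc popularity (slide 59)
    for term, count in Counter(query.lower().split()).items():
        C[doc][term] += eps * count           # C_j <- C_j + eps*q (slide 60)

def behavior_score(query, doc):
    q = Counter(query.lower().split())
    dot = sum(q[t] * C[doc][t] for t in q)
    norm = math.sqrt(sum(v * v for v in q.values())) * \
           math.sqrt(sum(v * v for v in C[doc].values()))
    return dot / norm if norm else 0.0        # cosine(q, C_j)

def rank(query, docs, text_score, w=0.3):
    # Combine the behavioral cosine with the regular text score.
    return sorted(docs, reverse=True,
                  key=lambda d: (1 - w) * text_score(query, d)
                                + w * behavior_score(query, d))
```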

61. Issues
- Normalization of C_j after updating.
- Assumption of query compositionality: is "white house" document popularity derived from "white" and "house"?
- Updating: live or batch?
- Basic assumption: relevance can be directly measured by the number of click-throughs. Valid?