
1 Hyperlink Analysis for the Web

2 Information Retrieval
Input: Document collection
Goal: Retrieve documents or text with information content that is relevant to the user’s information need
Two aspects:
1. Processing the collection
2. Processing queries (searching)

3 Classic information retrieval
Ranking is a function of query term frequency within the document (tf) and across all documents (idf)
This works because of the following assumptions in classical IR:
– Queries are long and well specified: “What is the impact of the Falklands war on Anglo-Argentinean relations”
– Documents (e.g., newspaper articles) are coherent, well authored, and are usually about one topic
– The vocabulary is small and relatively well understood
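As a concrete reference point (one standard weighting, not something this slide specifies), the tf–idf score of a document d for a query q can be written as:

\mathrm{score}(q,d) \;=\; \sum_{t \in q} \mathrm{tf}_{t,d} \cdot \log\frac{N}{\mathrm{df}_t}

where N is the number of documents in the collection and df_t is the number of documents containing term t.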

4 Web information retrieval
None of these assumptions hold:
– Queries are short: 2.35 terms on average
– Huge variety in documents: language, quality, duplication
– Huge vocabulary: hundreds of millions of terms
– Deliberate misinformation
Ranking is a function of the query terms and of the hyperlink structure

5 Connectivity-based ranking
Ranking returned documents:
– Query-dependent ranking
– Query-independent ranking
Hyperlink analysis – Idea: mine the structure of the web graph
– Each web page is a node
– Each hyperlink is a directed edge

6 Query-dependent ranking
Assigns a score that measures the quality and relevance of a selected set of pages to a given user query. The basic idea is to build a query-specific graph, called a neighborhood graph, and perform hyperlink analysis on it.

7 Building a neighborhood graph
A start set of documents matching the query is fetched from a search engine (say, the top 200 matches).
The start set is augmented by its neighborhood, which is the set of documents that either hyperlink to or are hyperlinked to by documents in the start set.
– Since the indegree of nodes can be very large, in practice only a limited number of these documents (say, 50) is included.
Each document in both the start set and the neighborhood is modeled by a node. There exists an edge from node A to node B if and only if document A hyperlinks to document B.
– Hyperlinks between pages on the same Web host can be omitted.
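A minimal sketch of this construction; search, in_links, and out_links are hypothetical helpers standing in for a search engine and a connectivity index, and the size limits mirror the illustrative numbers above:

```python
from urllib.parse import urlparse

def same_host(a, b):
    return urlparse(a).netloc == urlparse(b).netloc

def build_neighborhood_graph(query, search, in_links, out_links,
                             start_size=200, back_limit=50):
    start_set = search(query, k=start_size)           # top matches for the query
    nodes = set(start_set)
    for url in start_set:
        nodes.update(out_links(url))                   # forward set
        nodes.update(in_links(url, limit=back_limit))  # back set, indegree capped
    # One directed edge per hyperlink, skipping links between pages on the same host
    edges = {u: [v for v in out_links(u) if v in nodes and not same_host(u, v)]
             for u in nodes}
    return nodes, edges
```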

8 Neighborhood graph (figure)
The query results (Result 1 … Result n) form the start set; the forward set (f1 … fs) and the back set (b1 … bm) are added around it. There is an edge for each hyperlink, but no edges within the same host. This subgraph is associated with each query.

9 HITS [K’98]
Goal: Given a query, find:
– Good sources of content (authorities)
– Good sources of links (hubs)

10 Intuition
Authority comes from in-edges. Being a good hub comes from out-edges.
Better authority comes from in-edges from good hubs. Being a better hub comes from out-edges to good authorities.

11 (Figure: a page p receives its authority score A from in-edges from pages q1, …, qk and its hub score H from out-edges to pages r1, …, rk.)

12 HITS details
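The body of this slide (presumably the update rules) did not survive the transcript; for reference, the standard HITS updates from Kleinberg’s paper, iterated to convergence with normalization after each round, are:

A(p) \;=\; \sum_{q \,:\, (q,p) \in E} H(q), \qquad H(p) \;=\; \sum_{r \,:\, (p,r) \in E} A(r)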

13 HITS
Kleinberg proved that the H and A vectors will eventually converge, i.e., that termination is guaranteed.
– In practice we found the vectors to converge in about 10 iterations.
Documents are ranked by hub and authority scores respectively.
The algorithm does not claim to find all relevant pages, since there may be some that have good content but have not been linked to by many authors.
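A minimal sketch of the iteration on the neighborhood graph from slide 7 (edges maps each node to its out-neighbors); the fixed iteration count and the L2 normalization are choices of this sketch, not requirements of the algorithm:

```python
import math

def hits(edges, iterations=10):
    nodes = list(edges)
    auth = {n: 1.0 for n in nodes}
    hub = {n: 1.0 for n in nodes}
    for _ in range(iterations):
        # Authority: sum of hub scores of pages linking to the node
        auth = {n: sum(hub[u] for u in nodes if n in edges[u]) for n in nodes}
        # Hub: sum of authority scores of pages the node links to
        hub = {n: sum(auth[v] for v in edges[n]) for n in nodes}
        # Normalize so the scores stay bounded
        a_norm = math.sqrt(sum(x * x for x in auth.values())) or 1.0
        h_norm = math.sqrt(sum(x * x for x in hub.values())) or 1.0
        auth = {n: x / a_norm for n, x in auth.items()}
        hub = {n: x / h_norm for n, x in hub.items()}
    return auth, hub
```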

14 Problems with the HITS algorithm (1)
Only a relatively small part of the Web graph is considered, so adding edges to a few nodes can change the resulting hub and authority scores considerably.
– It is relatively easy to manipulate these scores.

15 Problems with the HITS algorithm (2)
We often find that the neighborhood graph contains documents not relevant to the query topic. If these nodes are well connected, the topic drift problem arises.
– The most highly ranked authorities and hubs tend not to be about the original topic.
– For example, when running the algorithm on the query “jaguar and car”, the computation drifted to the general topic “car” and returned the home pages of different car manufacturers as top authorities, and lists of car manufacturers as the best hubs.

16 Improvements
To avoid “undue weight” of the opinion of a single person:
– All the documents on a single host have the same influence on the document they are connected to as a single document would.
Ideas:
– If there are k edges from documents on a first host to a single document on a second host, we give each edge an authority weight of 1/k.
– If there are l edges from a single document on a first host to a set of documents on a second host, we give each edge a hub weight of 1/l.

17 Improvements
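The formulas on this slide did not survive the transcript; under the 1/k and 1/l weighting described on slide 16, the weighted updates (as in Bharat and Henzinger’s topic distillation work) take roughly this form, where w_A(q,p) and w_H(p,r) denote the authority and hub edge weights just defined:

A(p) \;=\; \sum_{q \,:\, (q,p) \in E} H(q)\, w_A(q,p), \qquad H(p) \;=\; \sum_{r \,:\, (p,r) \in E} A(r)\, w_H(p,r)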

18 Improvements
To solve the topic drift problem, content analysis can be used.
Ideas:
– Eliminating non-relevant nodes from the graph
– Regulating the influence of a node based on its relevance

19 Improvements
Computing relevance weights for nodes:
– The documents in the start set are used to define a broader query, and every document in the graph is matched against this query.
– Specifically, the concatenation of the first 1000 words from each start-set document is taken as the query Q, and similarity(Q, D) is computed for every document D in the graph.
All nodes whose weights are below a threshold are pruned.

20 Improvements
Regulating the influence of a node:
– Let W[n] be the relevance weight of a node n.
– W[n] * A[n] is used instead of A[n] for computing the hub scores.
– W[n] * H[n] is used instead of H[n] for computing the authority scores.
This reduces the influence of less relevant nodes on the scores of their neighbors.
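A minimal sketch combining slides 19 and 20, assuming a similarity(q, d) function (e.g., a cosine measure over term vectors); the pruning threshold, the 1000-word cutoff, and the iteration count are illustrative, and normalization is omitted for brevity:

```python
def relevance_weights(docs, start_set, similarity, threshold=0.1):
    # Broad query Q: first 1000 words of every start-set document, concatenated
    broad_query = " ".join(" ".join(docs[n].split()[:1000]) for n in start_set)
    weights = {n: similarity(broad_query, docs[n]) for n in docs}
    # Prune nodes whose relevance weight falls below the threshold
    return {n: w for n, w in weights.items() if w >= threshold}

def weighted_hits(edges, weights, iterations=10):
    nodes = [n for n in edges if n in weights]
    auth = {n: 1.0 for n in nodes}
    hub = {n: 1.0 for n in nodes}
    for _ in range(iterations):
        # Use W[u] * H[u] of the in-neighbors when computing authority scores
        auth = {n: sum(weights[u] * hub[u] for u in nodes if n in edges[u])
                for n in nodes}
        # Use W[v] * A[v] of the out-neighbors when computing hub scores
        hub = {n: sum(weights[v] * auth[v] for v in edges[n] if v in auth)
               for n in nodes}
    return auth, hub
```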

21 Google’s approach
Assumption: A link from page A to page B is a recommendation of page B by the author of A (we say B is a successor of A)
⇒ Quality of a page is related to its in-degree
Recursion: Quality of a page is related to
– its in-degree, and to
– the quality of the pages linking to it
⇒ PageRank [BP ’98]

22 Definition of PageRank
Consider the following infinite random walk (surf):
– Initially the surfer is at a random page
– At each step, the surfer proceeds
  – to a randomly chosen web page with probability d
  – to a randomly chosen successor of the current page with probability 1-d
The PageRank of a page p is the fraction of steps the surfer spends at p in the limit.

23 PageRank (cont.)
By the random walk theorem, PageRank is the stationary probability distribution of this Markov chain, i.e.
R(p) = d/n + (1 - d) · Σ_{q : (q,p) ∈ E} R(q) / outdegree(q)
where n is the total number of nodes in the graph.

24 PageRank (cont.)
Example (figure): page P has in-links from page A (out-degree 4) and page B (out-degree 3).
PageRank of P = (1 - d) × (1/4 of the PageRank of A + 1/3 of the PageRank of B) + d/n
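A minimal power-iteration sketch of this definition; edges maps each page to its successors, d is the random-jump probability from slide 22 (the value 0.15 and the iteration count are illustrative), and dangling pages are ignored for simplicity:

```python
def pagerank(edges, d=0.15, iterations=50):
    """edges: dict mapping each page to a list of its successors."""
    nodes = list(edges)
    n = len(nodes)
    rank = {p: 1.0 / n for p in nodes}
    for _ in range(iterations):
        new_rank = {}
        for p in nodes:
            # Each page q linking to p passes on rank[q] / outdegree(q)
            incoming = sum(rank[q] / len(edges[q]) for q in nodes if p in edges[q])
            # With probability d the surfer jumps to a random page,
            # with probability 1 - d it follows a link
            new_rank[p] = d / n + (1 - d) * incoming
        rank = new_rank
    return rank

# Example: rank = pagerank({"A": ["B"], "B": ["A", "C"], "C": ["A"]})
```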

25 PageRank
Used in Google’s ranking function
Query-independent
Summarizes the “web opinion” of the page’s importance

26 PageRank vs. HITS
PageRank:
– Computation: once for all documents and queries (offline)
– Query-independent – requires combination with query-dependent criteria
– Hard to spam
HITS:
– Computation: requires computation for each query
– Query-dependent
– Relatively easy to spam
– Quality depends on quality of start set

27 We want top-ranking documents to be both relevant and authoritative

28 Relevance is modeled by cosine scores
Authority is typically a query-independent property of a document
– Assign a query-independent quality score in [0,1] to each document d
– Denote this by g(d)

29 Net score
Consider a simple total score combining cosine relevance and authority:
net-score(q,d) = g(d) + cosine(q,d)
– Can use some other linear combination than an equal weighting
Now we seek the top K docs by net score
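For instance, a weighted linear combination (the parameter λ is an illustrative generalization, not something the slide fixes) would be:

\text{net-score}(q,d) \;=\; \lambda\, g(d) + (1-\lambda)\,\mathrm{cosine}(q,d), \qquad 0 \le \lambda \le 1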

30 Top K by net score – fast methods
First idea: order all postings by g(d)
Key: this is a common ordering for all postings
Thus, we can concurrently traverse query terms’ postings for
– Postings intersection
– Cosine score computation

31 Why order postings by g(d)?
Under g(d)-ordering, top-scoring docs are likely to appear early in postings traversal
In time-bound applications (say, we have to return whatever search results we can in 50 ms), this allows us to stop postings traversal early
– Short of computing scores for all docs in postings
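A minimal sketch of a time-bounded traversal over g(d)-ordered postings; postings[t] and score(doc, terms) are placeholders of this sketch, and merging the lists by g(d) before scoring is one simple way to exploit the common ordering, not the only one:

```python
import heapq
import time

def top_k_gd_ordered(postings, terms, score, k=10, budget_ms=50):
    """postings[t]: list of (g_d, doc_id) sorted by descending g(d)."""
    deadline = time.monotonic() + budget_ms / 1000.0
    heap = []       # min-heap of (net_score, doc_id) holding the best k so far
    seen = set()
    # All lists share the g(d) ordering, so a descending merge visits
    # high-quality documents first.
    merged = heapq.merge(*(postings[t] for t in terms), reverse=True)
    for g_d, doc in merged:
        if time.monotonic() > deadline:
            break   # time-bound early termination
        if doc in seen:
            continue
        seen.add(doc)
        net = g_d + score(doc, terms)
        if len(heap) < k:
            heapq.heappush(heap, (net, doc))
        elif net > heap[0][0]:
            heapq.heapreplace(heap, (net, doc))
    return sorted(heap, reverse=True)
```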

32 Champion lists in g(d)-ordering
Can combine champion lists with g(d)-ordering
Maintain for each term a champion list of the r docs with highest g(d) + tf-idf_{t,d}
Seek top-K results from only the docs in these champion lists
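A minimal sketch of building these champion lists at index time; index, g, and tfidf are placeholder structures assumed by this sketch:

```python
import heapq

def build_champion_lists(index, g, tfidf, r=100):
    """index[t]: iterable of doc ids containing term t.
    Keep, per term, the r docs with the highest g(d) + tf-idf(t, d)."""
    champions = {}
    for term, docs in index.items():
        champions[term] = heapq.nlargest(r, docs,
                                         key=lambda d: g[d] + tfidf[term][d])
    return champions
```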

33 High and low lists
For each term, we maintain two postings lists called high and low
– Think of high as the champion list
When traversing postings on a query, only traverse high lists first
– If we get more than K docs, select the top K and stop
– Else proceed to get docs from the low lists
Can be used even for simple cosine scores, without global quality g(d)
A means for segmenting the index into two tiers
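A minimal sketch of the two-tier traversal; high, low, and the subsequent scoring step are placeholders assumed by this sketch:

```python
def two_tier_candidates(high, low, terms, k):
    """Collect candidate docs from the high tier first,
    falling back to the low tier only if fewer than k candidates are found."""
    candidates = set()
    for t in terms:
        candidates.update(high.get(t, []))
    if len(candidates) < k:
        for t in terms:
            candidates.update(low.get(t, []))
    return candidates

# Scoring (simple cosine or a net score) is then applied to the candidates
# and the top k are returned.
```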

34 Impact-ordered postings
We only want to compute scores for docs for which wf_{t,d} is high enough
We sort each postings list by wf_{t,d}
Now: not all postings are in a common order!
How do we compute scores in order to pick off the top K?
– Two ideas follow

35 1. Early termination
When traversing t’s postings, stop early after either
– a fixed number of r docs, or
– wf_{t,d} drops below some threshold
Take the union of the resulting sets of docs
– One from the postings of each query term
Compute only the scores for docs in this union
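A minimal sketch of idea 1, assuming postings[t] is sorted by descending wf_{t,d}; the values of r and the threshold are illustrative:

```python
def early_termination_candidates(postings, terms, r=1000, threshold=0.1):
    """postings[t]: list of (wf, doc_id) sorted by descending wf."""
    union = set()
    for t in terms:
        for i, (wf, doc) in enumerate(postings.get(t, [])):
            if i >= r or wf < threshold:
                break  # stop early: enough docs taken or weights too small
            union.add(doc)
    return union  # scores are then computed only for docs in this union
```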

36 2. idf-ordered terms
When considering the postings of query terms, look at them in order of decreasing idf
– High-idf terms likely to contribute most to the score
As we update the score contribution from each query term
– Stop if doc scores are relatively unchanged
Can apply to cosine or some other net scores
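A minimal sketch of idea 2, accumulating scores term by term in decreasing-idf order and stopping once a term barely changes the totals; the stopping rule and the epsilon value are illustrative choices of this sketch:

```python
from collections import defaultdict

def idf_ordered_scores(postings, idf, terms, epsilon=1e-3):
    """postings[t]: list of (doc_id, weight) pairs; idf[t]: inverse document frequency."""
    scores = defaultdict(float)
    for t in sorted(terms, key=lambda term: idf[term], reverse=True):
        change = 0.0
        for doc, weight in postings.get(t, []):
            contribution = idf[t] * weight
            scores[doc] += contribution
            change += contribution
        if change < epsilon:
            break  # remaining low-idf terms would barely move the scores
    return scores
```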

37 Other applications
Web page collection
– The crawling process usually starts from a set of source Web pages. The Web crawler follows the source pages’ hyperlinks to find more Web pages.
– This process is repeated on each new set of pages and continues until no more new pages are discovered or until a predetermined number of pages have been collected.
– The crawler has to decide in which order to collect hyperlinked pages that have not yet been crawled.
– The crawlers of different search engines make different decisions, and so collect different sets of Web documents. A crawler might try to preferentially crawl “high quality” Web pages.
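A minimal sketch of quality-ordered crawling in the spirit of Cho et al.’s URL-ordering work; fetch, extract_links, and quality are hypothetical helpers, and the priority rule is only one possible choice, not prescribed by the slides:

```python
import heapq

def crawl(seed_urls, fetch, extract_links, quality, max_pages=10000):
    """Crawl up to max_pages, always expanding the highest-quality known URL next."""
    # Max-heap via negated priorities: higher estimated quality is crawled first
    frontier = [(-quality(url), url) for url in seed_urls]
    heapq.heapify(frontier)
    seen = set(seed_urls)
    collected = []
    while frontier and len(collected) < max_pages:
        _, url = heapq.heappop(frontier)
        page = fetch(url)
        collected.append((url, page))
        for link in extract_links(page):
            if link not in seen:
                seen.add(link)
                heapq.heappush(frontier, (-quality(link), link))
    return collected
```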

38 Other applications
Web page categorization – geographical scope
– Whether a given Web page is of interest only for people in a given region or is of nation- or worldwide interest is an interesting problem for hyperlink analysis. For example, a weather-forecasting page is interesting only to the region it covers, while the Internal Revenue Service Web page may be of interest to U.S. taxpayers throughout the world.
– A page’s hyperlink structure also reflects its range of interest. Local pages are mostly hyperlinked to by pages from the same region, while hyperlinks to pages of nationwide interest are roughly uniform throughout the country.
– This information lets search engines tailor query results to the region the user is in.

39 References
M. Henzinger, “Hyperlink Analysis for the Web,” IEEE Internet Computing, 2001.
J. Cho, H. García-Molina, and L. Page, “Efficient Crawling through URL Ordering,” Proc. Seventh Int’l World Wide Web Conf., Elsevier Science, New York, 1998.
S. Chakrabarti et al., “Automatic Resource Compilation by Analyzing Hyperlink Structure and Associated Text,” Proc. Seventh Int’l World Wide Web Conf., Elsevier Science, New York, 1998.
K. Bharat and M. Henzinger, “Improved Algorithms for Topic Distillation in Hyperlinked Environments,” Proc. 21st Int’l ACM SIGIR Conf. on Research and Development in Information Retrieval (SIGIR 98), ACM Press, New York, 1998.
L. Page et al., “The PageRank Citation Ranking: Bringing Order to the Web,” Stanford Digital Library Technologies, Working Paper 1999-0120, Stanford Univ., Palo Alto, Calif., 1998.
I. Varlamis et al., “THESUS, a Closer View on Web Content Management Enhanced with Link Semantics,” IEEE Transactions on Knowledge and Data Engineering, vol. 16, no. 6, June 2004.

40 Appendix

41 Random Walks
Random walk = discrete-time stochastic process over a graph G=(V,E) with a transition probability matrix P
– The random walk is at one node at any time, making node transitions at time steps t = 1, 2, …, with P_ij being the probability of going to node j when at node i
– The initial node is chosen according to some probability distribution q^(0) over V

42 Random Walks (cont.)
q^(t) = row vector whose i-th component is the probability that the chain is in node i at time t
q^(t+1) = q^(t) P  =>  q^(t) = q^(0) P^t
A stationary distribution is a probability distribution q such that q = q P (steady-state behavior)
Example (undirected graph with m edges):
– If P_ij = 1/degree(i) when (i,j) is in G and 0 otherwise, then q_i = degree(i)/2m

43 Random Walks (cont.)
Theorem: Under certain conditions:
– There exists a unique stationary distribution q with q_i > 0 for all i
– Let N(i,t) be the number of times the random walk visits node i in t steps. Then the fraction of steps the walk spends at i equals q_i, i.e.,
  lim_{t→∞} N(i,t)/t = q_i
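A small sketch checking the slide 42 example empirically: simulate the walk on a toy undirected graph and compare the visit frequencies to degree(i)/2m (the graph, seed, and step count are arbitrary choices of this sketch):

```python
import random
from collections import Counter

def visit_frequencies(adjacency, steps=100_000, seed=0):
    """Simulate a simple random walk and return the fraction of steps spent at each node."""
    rng = random.Random(seed)
    node = rng.choice(list(adjacency))
    visits = Counter()
    for _ in range(steps):
        node = rng.choice(adjacency[node])  # P_ij = 1/degree(i)
        visits[node] += 1
    return {n: c / steps for n, c in visits.items()}

# Toy undirected graph (each edge listed in both directions), m = 4 edges
graph = {"a": ["b", "c"], "b": ["a", "c", "d"], "c": ["a", "b"], "d": ["b"]}
freq = visit_frequencies(graph)
# Expected stationary probabilities: degree(i) / (2m)
expected = {n: len(nbrs) / sum(len(v) for v in graph.values())
            for n, nbrs in graph.items()}
```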

