PageRank Algorithm -- Bringing Order to the Web (Hu Bin)

1 PageRank Algorithm -- Bringing Order to the Web (Hu Bin)

2 Earlier Search Engines
Search by keywords; return all pages that contain those words.
If thousands of pages are returned, which comes first? -- Return the pages with the highest frequencies of those words first.
Problem -- If people want their page ranked at the top for a word search (e.g., “database”), they just need to repeat the word many, many times.
Such search engines can easily be “fooled”.

3 Link Analysis
The goal is to rank pages.
Intuition: recommendations -- The importance of each page should be decided by what other pages “say” about it. A “link” from page A to page B is a recommendation of page B by the author of A. This implies that we need to mine the structure of the web graph.
Two main approaches:
(1) Static: use the links in all pages to calculate a ranking of every page (Google)
(2) Dynamic: search by keywords first, then use the links in the results to dynamically determine a ranking (IBM Clever -- Hubs and Authorities)

4 Static --- Google’s Approach
PageRank was defined by the founders of Google. For a page A with in-links from pages B, C, …:
R(A) = (1 - d) + d * ( R(B)/outD(B) + R(C)/outD(C) + … )
outD(B) = # edges leaving page B = # hyperlinks in page B. Dividing by it means that page B distributes its rank equally over all the pages it points to.
d is a damping factor which is set between 0 and 1, normally 0.85.
Example: if R(B) = 3 and R(C) = 4, and by the graph outD(B) = 1 and outD(C) = 2, then we can calculate
R(A) = 0.85 * (3/1 + 4/2) + (1 - 0.85) = 4.4
Problem: where do we START? To get R(B) we need to know R(A) first, but to get R(A) we need to know R(B) first.
[Graph: pages B and C both link to page A]
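The slide’s arithmetic can be checked directly. A minimal sketch, assuming the slide’s graph (B and C link to A, with outD(B) = 1 and outD(C) = 2):

```python
# Worked version of the slide's example.
d = 0.85                     # damping factor
R_B, R_C = 3.0, 4.0          # given ranks of B and C
outD_B, outD_C = 1, 2        # out-degrees read off the graph

R_A = d * (R_B / outD_B + R_C / outD_C) + (1 - d)
print(round(R_A, 2))  # 4.4
```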

5 Matrix Formulation
PageRank’s matrix formulation -- for one page:
R_1 = (1 - d) + d * (α_11 * R_1/outD_1 + α_12 * R_2/outD_2 + … + α_1n * R_n/outD_n)
Writing this for all pages gives equation (1).
α_ij = 0 if there is no link to page “i” in page “j”; otherwise α_ij = 1.
How do we calculate it?

6 Calculating PageRank’s Matrix
Formulation (1) can be written as:
R = C + M * R --------- (2)
Can we solve it like this? (I – M) * R = C, so R = (I – M)^(-1) * C?
Be careful! It is a matrix equation. The determinant of (I – M) may be zero; in other words, (I – M) may be a singular matrix. Therefore (I – M)^(-1) may NOT exist!
Is (I – M) a singular matrix?
-- No? How do we prove it?
-- Yes? Then how do we solve equation (2)?

7 Solving the Matrix Formulation
We can NOT prove whether (I – M) is a singular matrix or not. (In fact, it can be singular in the real world.)
We solve it iteratively instead:
R_(n+1) = C + M * R_n --------- (2)
We assume an initial value R_0 to calculate R_1, use R_1 to get R_2, and so on.
After many, many iterations, what happens? See the example.
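The iteration above can be sketched as follows. The 3-page graph here is a made-up illustration, not the slide’s example:

```python
import numpy as np

# Iterate R_{n+1} = C + M R_n on a tiny assumed graph.
d = 0.85
links = {0: [1, 2], 1: [2], 2: [0]}   # links[j] = pages that page j points to
n = 3

# M[i][j] = d * alpha_ij / outD_j (damping folded into M, constant term in C)
M = np.zeros((n, n))
for j, outs in links.items():
    for i in outs:
        M[i, j] = d / len(outs)

C = np.full(n, 1 - d)        # constant term (1 - d) for every page
R = np.ones(n)               # initial guess R_0 = 1 for every page
for _ in range(100):
    R = C + M @ R            # one iteration of equation (2)
print(np.round(R, 4))
```

Starting from a different R_0 lands on the same fixed point, which is the convergence claim of the next slide.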

8 Example of PageRank
At first, we assume the rank of every page is 1. After many iterations, the values no longer change. -- Why? The iteration converges.
If the initial values are different, does the solution change? -- We get the same solution whatever initial value is taken! -- Why? The iteration converges.
[Graph: three pages Ne, Am, MS]

9 Convergence
It means:
-- whatever values we start at,
-- after running a number of iterations, we end up with the same final values,
-- and these values no longer change even if we do further iterations.
The reason that R_(n+1) = C + M * R_n converges is the Markov chain theorem.

10 Markov Chain Theorem (I)
Surfing the web is a theoretical random walk: from page A, go to page B by randomly choosing an outgoing link in page A.
This walk can lead to:
(1) dead ends at pages with no outgoing link
(2) cycles around cliques of interconnected pages
This theoretical random walk is known as a Markov chain or Markov process.

11 Markov Chain Theorem (II)
The Markov chain R_(n+1) = M * R_n has a unique stationary distribution under three conditions:
(1) M is stochastic
(2) M is aperiodic
(3) M is irreducible
To make these conditions true:
(1) All columns of M add up to 1 and no value is negative
(2) Make sure that G is not bipartite
(3) Make sure that G is strongly connected
(Note: G is the graph to which M corresponds.)
(Note: proved in G. R. Grimmett and D. R. Stirzaker, Probability and Random Processes.)

12 PageRank and Markov Chain ---- Condition (1)
M is stochastic ⇔ all columns of M add up to 1 and no value is negative.
What is M? M_ij = α_ij / outD_j, where
-- α_ij = 0 if there is no link to page “i” in page “j”; otherwise α_ij = 1
-- outD_j = total # edges leaving page j = total # hyperlinks in page j
Therefore each column j of M sums to (Σ_i α_ij) / outD_j = outD_j / outD_j = 1 (as long as page j has at least one outgoing link), so M is stochastic.
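The stochastic condition can be checked mechanically. A minimal sketch on an assumed 3-page graph in which every page has at least one outgoing link:

```python
import numpy as np

# Build M with M[i][j] = alpha_ij / outD_j and verify every column sums to 1.
links = {0: [1, 2], 1: [0], 2: [0, 1]}   # links[j] = pages that page j links to
n = 3

M = np.zeros((n, n))
for j, outs in links.items():
    for i in outs:                        # alpha_ij = 1 for each link j -> i
        M[i, j] = 1.0 / len(outs)         # divide by outD_j

print(M.sum(axis=0))  # each column sums to 1, so M is stochastic
```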

13 PageRank and Markov Chain ---- Condition (2)
M is aperiodic ⇔ G is not bipartite. However, sometimes G IS bipartite.
[Graph: a bipartite example on nodes A, B, C, D, E, F, G]

14 PageRank and Markov Chain ---- Condition (3)
M is irreducible ⇔ G is strongly connected.
Definition of a strongly connected graph:
-- A strongly connected digraph is a directed graph in which it is possible to reach any node starting from any other node by traversing edges in the direction(s) in which they point. (http://mathworld.wolfram.com/StronglyConnectedDigraph.html)
However, G is not always strongly connected. Two problems: (1) rank leak, (2) rank sink.
In example (2), we cannot reach “A” or “B” from “C”.
(Note: rank leak is a special case of rank sink, but their solutions are different.)
[Graphs: (1) a rank-leak example and (2) a rank-sink example, each on pages A, B, C]

15 Rank Leak
Pages which have no outgoing links.
Called a rank leak because all importance will “leak” out of the web ---- every page’s importance becomes zero.
Solution: assume B has links to all web pages with equal probability.
[Graph: pages A, C and a leak page B]
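The slide’s fix can be sketched as a column patch. The tiny matrix below is an assumed example in which page 2 (the “B” of the slide) has no outgoing links:

```python
import numpy as np

# Repair a rank leak: replace the leak page's all-zero column with 1/n,
# i.e. pretend it links to every page with equal probability.
n = 3
M = np.array([[0.0, 0.5, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.5, 0.0]])           # column 2 is all zeros: a rank leak

for j in range(n):
    if M[:, j].sum() == 0:                # page j has no outgoing links
        M[:, j] = 1.0 / n                 # distribute its rank over all pages

print(M.sum(axis=0))  # every column now sums to 1
```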

16 Rank Sink
A group of pages with no links out of the group.
Called a rank sink because this group will accumulate all the importance of the web; the importance of every page outside the group becomes zero.
[Graph: pages A, B, C, with a sink group]

17 Rank Sink Solution
The original Markov chain is R_(n+1) = M * R_n (NO “d”!). The practical PageRank formula is:
R_(n+1) = (1 - d) + d * M * R_n
Why do we need “d”?
(1) Intuition in the random surfer model:
-- d is the probability of jumping from page “A” to page “B” by following the links in page “A”
-- 1 - d is the probability of jumping to a random page on the web instead of following a link in the current page
(2) It implicitly builds a graph that is not bipartite and is strongly connected, so conditions (2) and (3) are satisfied.
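A minimal sketch of the damped iteration, on an assumed tiny graph with a rank sink (pages 1 and 2 only link to each other, page 0 feeds into them). Without “d” the sink would absorb everything; with d = 0.85 every page keeps some rank:

```python
import numpy as np

d = 0.85
M = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])      # pages 1 <-> 2 form a sink; 0 feeds in

n = 3
R = np.ones(n)
for _ in range(200):
    R = (1 - d) + d * (M @ R)        # practical PageRank iteration
print(np.round(R, 4))                # page 0 keeps rank (1 - d), not 0
```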

18 Building a Not-Bipartite & Strongly Connected Graph
The original graph is bipartite and not strongly connected: we cannot go from “b” to “a” or “f”, cannot go from “d” or “f” to any other page, and so on.
By introducing “d”, the graph becomes strongly connected and not bipartite. For example, there is a 1 - d chance of jumping from “b” to “a”, “b” or “f”; we can go to any page from the current position.
[Graphs: the original graph on nodes a, b, c, d, f, and the same graph after introducing “d”]

19 Summary of PageRank
Build a matrix based on the pages and the links in those pages.
By introducing “d”, make the graph not bipartite and strongly connected.
Therefore the system is a Markov chain with a stationary distribution.
Solve it iteratively: R_(n+1) = (1 - d) + d * M * R_n

20 Dynamic -- Hubs & Authorities
Authority: a page that offers info about a topic.
Hub: a page that doesn’t provide much info itself, but tells us where to find pages about a topic.
Good hub: a page that points to many good authorities.
Good authority: a page pointed to by many good hubs.

21 Goal -- Hubs & Authorities
Goal: given a keyword query, assume there is a set of pages P that match the query; calculate a hub value and an authority value for each page in P, instead of over the whole web.
Pages with high authority are the results of the query (to find good sources of content).

22 Build a Subgraph
Find the set S of pages containing the keyword, and use S to build a subgraph:
-- Find all pages the S pages point to, i.e., their forward neighbors.
-- Find all pages that point to the S pages, i.e., their backward neighbors.
Compute over this subgraph.
[Diagram: the query results form the start set; forward neighbors f_1 … f_s and backward neighbors b_1 … b_m extend it]
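The subgraph construction can be sketched in a few lines. The tiny web graph and query matches below are made-up assumptions:

```python
# Build the HITS subgraph: start set + forward neighbors + backward neighbors.
links = {                      # links[p] = pages that p points to
    "s1": ["f1", "f2"],
    "s2": ["f1"],
    "b1": ["s1"],
    "b2": ["s2", "f2"],
    "f1": [],
    "f2": [],
}
start_set = {"s1", "s2"}       # pages matching the query

forward = {q for p in start_set for q in links.get(p, [])}
backward = {p for p, outs in links.items() for q in outs if q in start_set}
subgraph = start_set | forward | backward
print(sorted(subgraph))
```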

23 Computing Hubs and Authorities
For all pages:
(1) Number the pages {1, 2, …, n}.
(2) Define their adjacency matrix M to be the n*n matrix where M_ij = 1 if page i links to page j, and 0 otherwise.
(3) Define A = (a_1, a_2, …, a_n) and H = (h_1, h_2, …, h_n): each page p has a non-negative authority weight a_p and a non-negative hub weight h_p.
[Diagrams: h(p) sums the authority weights a(q_i) of the pages p points to; a(p) sums the hub weights h(v_i) of the pages pointing to p]

24 Example (1)
Pages X, Y, Z with adjacency matrix (rows/columns ordered X, Y, Z):
M = | 0 1 1 |
    | 1 0 0 |
    | 1 1 1 |
Update rules: (1) A ← M^T * H and (2) H ← M * A.
After each iteration of operations (1) and (2), normalize H and A (otherwise the values will keep increasing).
[Graph: the link structure among X, Y, Z corresponding to M]
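The iteration on the slide’s 3-page example can be sketched directly (using the matrix as read off the slide, with M[i][j] = 1 if page i links to page j):

```python
import numpy as np

M = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 1, 1]], dtype=float)

n = 3
A = np.ones(n)                       # authority weights
H = np.ones(n)                       # hub weights
for _ in range(100):
    A = M.T @ H                      # (1) authority from incoming hubs
    H = M @ A                        # (2) hub from outgoing authorities
    A /= np.linalg.norm(A)           # normalize, otherwise values blow up
    H /= np.linalg.norm(H)
print(np.round(A, 4), np.round(H, 4))
```

The normalized vectors converge to the principal eigenvectors of M^T M and M M^T respectively, which is what the proof on slide 26 establishes.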

25 Example (2)
[Table: hub and authority values for X, Y, Z at iterations 0, 1, …, normalized at each step; the values converge]

26 Proof of Convergence
Theorem 3.1: The sequences x_1, x_2, x_3, … and y_1, y_2, y_3, … converge.
Proof. Let G = (V, E), with V = {p_1, p_2, …, p_n}, and let A denote the adjacency matrix of the graph G; the (i, j)th entry of A is equal to 1 if (p_i, p_j) is an edge of G, and is equal to 0 otherwise. One easily verifies that the I and O operations can be written x ← A^T y and y ← A x respectively. Thus x_k is the unit vector in the direction of (A^T A)^(k-1) A^T z, and y_k is the unit vector in the direction of (A A^T)^k z. Now, a standard result of linear algebra states that if M is a symmetric n × n matrix, and v is a vector not orthogonal to the principal eigenvector ω_1(M), then the unit vector in the direction of M^k v converges to ω_1(M) as k increases without bound. Also (as a corollary), if M has only non-negative entries, then the principal eigenvector of M has only non-negative entries.
----- “Authoritative Sources in a Hyperlinked Environment”, Jon M. Kleinberg

27 PageRank vs. Authorities
PageRank (Google):
-- Query-independent: computed for all web pages stored in the database prior to the query
-- Quality depends only on the web pages stored in the database
-- Computes authorities only
-- Trivial and fast to compute
HITS (CLEVER):
-- Query-dependent: performed on the set of retrieved web pages for each query
-- Quality depends on the quality of the start set
-- Computes authorities and hubs
-- Easy to compute, but real-time execution is hard

28 References
http://www.google.com/technology/
Sergey Brin and Larry Page. The Anatomy of a Large-Scale Hypertextual Web Search Engine.
Larry Page, Sergey Brin, R. Motwani, and T. Winograd (1998). The PageRank Citation Ranking: Bringing Order to the Web.
Chris Ridings, Mike Shishigin, and Jill Whalen (2002). PageRank Uncovered.
Monica Bianchini, Marco Gori, and Franco Scarselli (2000). Inside PageRank.
Amy N. Langville and Carl D. Meyer (2004). Deeper Inside PageRank.
Jon M. Kleinberg (1997). Authoritative Sources in a Hyperlinked Environment.
Ayman Farahat, Thomas Lofaro, Joel C. Miller, Gregory Rae, and Lesley A. Ward (2001). Authority Rankings from HITS, PageRank, and SALSA: Existence, Uniqueness, and Effect of Initialization.

29 References
Alessandro Panconesi, DI, La Sapienza of Rome (2005). The Stationary Distribution of a Markov Chain.
Dean L. Isaacson and Richard W. Madsen (1976). Markov Chains, Theory and Applications.
John G. Kemeny and J. Laurie Snell (2002). Finite Markov Chains and Algorithmic Applications.
Monika Henzinger. Hyperlink Analysis on the Web.
Vagelis Hristidis. Random Walks in Ranking Query Results in Semistructured Databases.
Shang-Hua Teng. SVD, Eigenvector, and Web Search.
G. R. Grimmett and D. R. Stirzaker. Probability and Random Processes.
Dragomir R. Radev. Information Retrieval.

30 THANKS!

