
1 © 2013 A. Haeberlen, Z. Ives PageRank and Adsorption. NETS 212: Scalable and Cloud Computing, University of Pennsylvania. October 17, 2013

2 © 2013 A. Haeberlen, Z. Ives Announcements
- HW3 is available on the web page. Due October 31st at 10:00pm EDT. Pre-/postpone deadline?
- Andreas will be out of town next week (IMC): no class on Tuesday or Thursday; please work on HW2+HW3. No office hours on Wednesday (but keep an eye on Piazza). Catch-up class on Oct 30, 4:30-6:00pm, in DRLB A5.
- Final project: Mini-Facebook application. Two-person team project, due at the end of the semester. Specifications will be available soon. Please start thinking about a potential team member. Once the spec is out, please send me an email and tell me who is on your team (two-person teams only!)

3 © 2013 A. Haeberlen, Z. Ives Examples from last year

4 © 2013 A. Haeberlen, Z. Ives Some 'lessons learned' from last year
The most common mistakes were:
- Started too late; tried to do everything at the last minute. You need to leave enough time at the end for debugging etc.
- Underestimated the amount of integration work. Suggestion: define clean interfaces, build dummy components for testing, and exchange code early and throughout the project. Suggestion: work together from the beginning! (Pair programming?)
- Unbalanced team. You need to pick your teammate wisely, make sure they pull their weight, and keep them motivated. You and your teammate should have compatible goals (example: go for the Facebook award vs. get a passing grade).
- Started coding without a design. FIRST make a plan: What pages will there be? How will the user interact with them? How will the interaction between client and server work? What components will the program have? What tables will there be in the database, and what fields should be in them? Etc.

5 © 2013 A. Haeberlen, Z. Ives Facebook award
Facebook is sponsoring an award for the best final project: one Facebook bag for each team member, plus Facebook T-shirts. Winners will be announced on the course web page ("NETS212 Hall of Fame").

6 © 2013 A. Haeberlen, Z. Ives Plan for today
- PageRank (NEXT): different formulations (iterative, matrix); complications (sinks and hogs); implementation in MapReduce
- Adsorption: label propagation; implementation in MapReduce

7 © 2013 A. Haeberlen, Z. Ives Why link analysis?
Suppose a search engine processes a query for "team sports". Problem: millions of pages contain these words! Which ones should we return first?
Idea: Hyperlinks encode a considerable amount of human judgment. What does it mean when a web page links to another page? Intra-domain links are often created primarily for navigation; inter-domain links confer some measure of authority. So, can we simply boost the rank of pages with lots of inbound links?

8 © 2013 A. Haeberlen, Z. Ives Problem: Popularity ≠ relevance!
[Figure: example pages and links, including the "A-Team" page, Hollywood's "Series to Recycle" page, the Yahoo directory, Wikipedia, Mr. T's page, a "Team Sports" page, and a "Cheesy TV shows" page.] Shouldn't links from Yahoo and Wikipedia "count more"?

9 © 2013 A. Haeberlen, Z. Ives Other applications
This question occurs in several other areas: How do we measure the "impact" of a researcher (#papers? #citations?)? Who are the most "influential" individuals in a social network (#friends?)? Which programmers are writing the "best" code (#uses?)?

10 © 2013 A. Haeberlen, Z. Ives PageRank: Intuition
Imagine a contest for The Web's Best Page. Initially, each page has one vote. Each page votes for all the pages it has a link to; to ensure fairness, pages voting for more than one page must split their vote equally between them. Voting proceeds in rounds; in each round, each page has the number of votes it received in the previous round. In practice, it's a little more complicated, but not much!
[Figure: a small example graph of pages A-J.] Shouldn't E's vote be worth more than F's? How many levels should we consider?

11 © 2013 A. Haeberlen, Z. Ives PageRank
Each page i is given a rank $x_i$. Goal: assign the $x_i$ such that the rank of each page is governed by the ranks of the pages linking to it:

$$x_i = \sum_{j \in B_i} \frac{x_j}{N_j}$$

where $B_i$ is the set of pages j that link to page i, and $N_j$ is the number of links out from page j. How do we compute the rank values?

12 © 2013 A. Haeberlen, Z. Ives Random Surfer Model
PageRank has an intuitive basis in random walks on graphs. Imagine a random surfer who starts on a random page and, in each step: with probability d, clicks on a random link on the page; with probability 1-d, jumps to a random page (bored?). The PageRank of a page can be interpreted as the fraction of steps the surfer spends on the corresponding page. The transition matrix can be interpreted as a Markov chain.
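To make the random-surfer reading concrete, here is a minimal Monte Carlo sketch in Python (the slides themselves contain no code; the graph encoding and function name are my own). It simulates the surfer for many steps and reports the fraction of time spent on each page, which approximates the PageRank:

```python
import random

def simulate_surfer(graph, d=0.85, steps=1_000_000):
    """Estimate PageRank by simulating the random surfer.

    graph: dict mapping each page to the list of pages it links to.
    d: probability of following a link; 1 - d is the jump probability.
    """
    pages = list(graph)
    visits = {p: 0 for p in pages}
    current = random.choice(pages)
    for _ in range(steps):
        visits[current] += 1
        out_links = graph[current]
        if out_links and random.random() < d:
            current = random.choice(out_links)  # follow a random link
        else:
            current = random.choice(pages)      # bored: jump to a random page
    # Fraction of steps spent on each page ~ its PageRank
    return {p: v / steps for p, v in visits.items()}
```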

13 © 2013 A. Haeberlen, Z. Ives Iterative PageRank (simplified)
Initialize all ranks to be equal, e.g., $x_i^{(0)} = \frac{1}{n}$ for a web of n pages. Then iterate until convergence:

$$x_i^{(k+1)} = \sum_{j \in B_i} \frac{x_j^{(k)}}{N_j}$$

No need to decide how many levels to consider!
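A direct translation of this iteration into code might look like the following sketch (Python; the names are illustrative, not from the slides). Like the simplified formulation above, it assumes every page has at least one outgoing link; the complications that arise otherwise are the subject of the next slides:

```python
def simple_pagerank(graph, iterations=25):
    """Simplified iterative PageRank: no damping, no sink handling.

    graph: dict mapping each page to the list of pages it links to.
    Assumes every page has at least one out-link.
    """
    n = len(graph)
    rank = {p: 1.0 / n for p in graph}           # initialize all ranks equally
    for _ in range(iterations):
        new_rank = {p: 0.0 for p in graph}
        for page, out_links in graph.items():
            share = rank[page] / len(out_links)  # split the vote equally
            for dest in out_links:
                new_rank[dest] += share
        rank = new_rank
    return rank
```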

14 © 2013 A. Haeberlen, Z. Ives Example, step 1: Initialize all ranks to be equal.

15 © 2013 A. Haeberlen, Z. Ives Example, step 2: Propagate weights across out-edges.

16 © 2013 A. Haeberlen, Z. Ives Example, step 3: Compute weights based on in-edges.

17 © 2013 A. Haeberlen, Z. Ives Example: Convergence

18 © 2013 A. Haeberlen, Z. Ives Naïve PageRank Algorithm Restated
Let N(p) = the number of outgoing links from page p, and B(p) = the set of pages that link back to page p. Each page b distributes its importance to all of the pages it points to (so we scale by 1/N(b)), and page p's importance is increased by the importance of its back set:

$$x(p) = \sum_{b \in B(p)} \frac{x(b)}{N(b)}$$

19 © 2013 A. Haeberlen, Z. Ives In Linear Algebra formulation
Create an m x m matrix M to capture links: M(i, j) = $1/n_j$ if page i is pointed to by page j, where page j has $n_j$ outgoing links; M(i, j) = 0 otherwise. Initialize all PageRanks to 1 and multiply by M repeatedly until all values converge:

$$x^{(k+1)} = M \, x^{(k)}$$

This computes the principal eigenvector via power iteration.
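To illustrate the power iteration, here is a small numpy sketch. The link structure below is invented for illustration (the slide's own example values did not survive into this transcript): each column j holds $1/n_j$ in the rows of the pages that j points to, so every column sums to 1:

```python
import numpy as np

# Hypothetical 3-page web (rows/columns: Google, Yahoo, Amazon):
# Google links to Amazon; Yahoo links to itself and Amazon;
# Amazon links to Google and Yahoo.
M = np.array([[0.0, 0.0, 0.5],
              [0.0, 0.5, 0.5],
              [1.0, 0.5, 0.0]])

x = np.ones(3)                # initialize all PageRanks to 1
for _ in range(100):
    x_next = M @ x            # one multiplication = one voting round
    if np.allclose(x, x_next, atol=1e-12):
        break
    x = x_next
print(x)  # converges to the principal eigenvector; entries sum to 3
```

The total rank stays equal to the number of pages because every column of M sums to 1: each page redistributes exactly the rank it has.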

20 © 2013 A. Haeberlen, Z. Ives A brief example
[Figure: a three-page web of Google, Amazon, and Yahoo; the update (g', y', a') = M (g, y, a) is iterated.] Running for multiple iterations, the ranks converge, and the total rank always sums to the number of pages.

21 © 2013 A. Haeberlen, Z. Ives Oops #1: PageRank sinks
[Figure: the same three-page web, but now one page is a 'dead end' with no outgoing links.] Running for multiple iterations, the ranks decay toward zero: PageRank is lost after each round, because weight that reaches the dead end is never passed on.

22 © 2013 A. Haeberlen, Z. Ives Oops #2: PageRank hogs
[Figure: the same three-page web, but now one page has no links leading back out.] Running for multiple iterations, that page accumulates all the weight: PageRank can flow in but cannot flow out.

23 © 2013 A. Haeberlen, Z. Ives Stopping the Hog
Damp the propagation step with a factor of 0.85: (g', y', a') = 0.85 * M (g, y, a). Running for multiple iterations, the hog no longer swallows all the weight... though does this seem right? [Figure: the same three-page web with the damped iteration values.]

24 © 2013 A. Haeberlen, Z. Ives Improved PageRank
- Remove out-degree 0 nodes (or consider them to refer back to the referrer)
- Add a decay factor d to deal with sinks; typical value: d = 0.85
- Intuition, in terms of the "random surfer": the surfer occasionally stops following the link sequence and jumps to a new random page, with probability 1 - d
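A sketch of the improved iteration in Python (illustrative names, not course code). It uses the "+(1-d)" normalization from the MapReduce slide that follows, under which the ranks sum to the number of pages; for out-degree-0 nodes it uses a third common option not listed above, spreading a sink's rank evenly over all pages, which keeps the total rank constant:

```python
def pagerank(graph, d=0.85, iterations=25):
    """PageRank with decay factor d; sinks spread their rank evenly.

    graph: dict mapping each page to the list of pages it links to.
    Ranks sum to the number of pages throughout.
    """
    n = len(graph)
    rank = {p: 1.0 for p in graph}
    for _ in range(iterations):
        # Rank sitting on sinks (out-degree 0) is spread over all pages.
        sink_mass = sum(rank[p] for p, out in graph.items() if not out)
        new_rank = {p: (1 - d) + d * sink_mass / n for p in graph}
        for page, out_links in graph.items():
            for dest in out_links:
                new_rank[dest] += d * rank[page] / len(out_links)
        rank = new_rank
    return rank
```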

25 © 2013 A. Haeberlen, Z. Ives PageRank on MapReduce
- Inputs of the form: page → (currentWeightOfPage, {adjacency list})
- Map: page p "propagates" $1/N_p$ of its d * weight(p) to the destinations of its out-edges (think like a vertex!)
- Reduce: the p-th page sums the incoming weights and adds (1-d), to get its weight'(p)
- Iterate until convergence. Common practice: run some fixed number of times, e.g., 25x. Alternatively: test after each iteration with a second MapReduce job, to determine the maximum change between old and new weights
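A framework-agnostic sketch of this map/reduce logic (Python generators; the function names and the 'LINKS'/'WEIGHT' tags are my own). One practical detail the slide leaves implicit: the mapper must also re-emit each page's adjacency list, so that the reducer can reproduce the input format for the next iteration. Sinks are assumed to have been handled as on the previous slide:

```python
D = 0.85  # decay factor d

def map_pagerank(page, value):
    """Input record: page -> (current_weight, adjacency_list)."""
    weight, out_links = value
    # Re-emit the graph structure so the next iteration still has it.
    yield page, ('LINKS', out_links)
    # Propagate 1/N_p of d * weight(p) to each out-neighbor.
    for dest in out_links:
        yield dest, ('WEIGHT', D * weight / len(out_links))

def reduce_pagerank(page, values):
    out_links, total = [], 0.0
    for kind, payload in values:
        if kind == 'LINKS':
            out_links = payload
        else:
            total += payload
    # New weight = (1 - d) + sum of incoming contributions.
    yield page, ((1 - D) + total, out_links)
```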

26 © 2013 A. Haeberlen, Z. Ives Plan for today
- PageRank: different formulations (iterative, matrix); complications (sinks and hogs); implementation in MapReduce
- Adsorption (NEXT): label propagation; implementation in MapReduce

27 © 2013 A. Haeberlen, Z. Ives YouTube Suggestions
What can we leverage to make such recommendations?

28 © 2013 A. Haeberlen, Z. Ives Co-views: Video-video
Idea #1: Keep track of which videos are frequently watched together. If many users watched both video A and video B, then A should be recommended to users who have viewed B, and vice versa. If there are many such videos, how can we rank them?

29 © 2013 A. Haeberlen, Z. Ives Co-Views: User-video
Idea #2: Leverage similarities between users. If Alice and Bob have both watched videos A, B, and C, and Alice has additionally watched video D, then perhaps D will interest Bob too? How can we see that in the graph? A short path between two videos; several paths between two videos; paths that avoid high-degree nodes in the graph (why?)
[Figure: a bipartite graph of users and videos A-E.]

30 © 2013 A. Haeberlen, Z. Ives More sophisticated link analysis
PageRank computes a stationary distribution for the random walk: the probability is independent of where you start, and there is one authority score for every page. But here we want to know how likely we are to end up at video j given that we started from user i, e.g., what are the odds that user i will like video j? This is a probability conditioned on where you start.

31 © 2013 A. Haeberlen, Z. Ives Video-video and user-video combined
Our task: take the video-video co-views and the user-video co-views (potentially annotated with weights), and assign to each video a score for each user, indicating the likelihood the user will want to view the video.
[Figure: Alice and Bob connected to videos such as "Vietnam Trekking", "Globe Trekker", and "SE Asia Tour"; which video should be recommended next?]

32 © 2013 A. Haeberlen, Z. Ives Adsorption: Label propagation
Adsorption: the adhesion of atoms, ions, etc. to a surface. The adsorption algorithm attempts to "adhere" labels and weights to various nodes, establishing a connection. There are three equivalent formulations of the method: a random walk algorithm that looks much like PageRank; an averaging algorithm that is easily MapReduced (this is the one we'll focus on); and a linear systems formulation.

33 © 2013 A. Haeberlen, Z. Ives Adsorption as an iterative average
Easily MapReducible. Pre-processing step: create a series of labels L, one for each user or entity. Take the set of vertices V; for each label l in L, designate an "origin" node $v_l$ (typically the node given label l) and annotate it with the label l and weight 1.0. Much like what we do in PageRank to start.

34 © 2013 A. Haeberlen, Z. Ives Adsorption: Propagating labels
Iterative process: compute the likelihood, for each vertex v, that a user x in a random walk will arrive at v; call this the probability of $l_x \in L$ associated with node v.
[Figure: the user-video graph from before; Alice's node starts with label A: 1.0 and Bob's with B: 1.0.]

35 © 2013 A. Haeberlen, Z. Ives Adsorption: Propagating labels (continued)
Distribute labels + weights to edges. [Figure: edges annotated with label weights (A: 0.6, A: 0.4, B: 0.8, B: 0.2).]

36 © 2013 A. Haeberlen, Z. Ives Adsorption: Propagating labels (continued)
Assign node labels & weights by summing, then normalize node weights so they sum to 1.0. [Figure: node label weights after the first round.]

37 © 2013 A. Haeberlen, Z. Ives Adsorption: Propagating labels (continued)
Divide & propagate weights along edges in each direction. [Figure: edge weights such as A: 0.3, A: 0.13, B: 0.26, B: 0.2.]

38 © 2013 A. Haeberlen, Z. Ives Adsorption: Propagating labels (continued)
Assign node labels & weights by summing. [Figure: node weights such as A: 0.43 and B: 0.46.]

39 © 2013 A. Haeberlen, Z. Ives Adsorption: Propagating labels (continued)
Repeat... [Figure: labels have now spread across most nodes, e.g., A: 0.26 B: 0.16 on one node and A: 0.10 B: 0.36 on another.]

40 © 2013 A. Haeberlen, Z. Ives Adsorption: Propagating labels (continued)
...until convergence. [Figure: the final label distributions on all nodes.]

41 © 2013 A. Haeberlen, Z. Ives Adsorption algorithm formulation
Inputs: G = (V, E, w) where $w : E \rightarrow \mathbb{R}^+$; L: set of labels; $V_L \subseteq V$: nodes with labels.

repeat
  foreach $v \in V$ do
    $L_v \leftarrow \sum_{(u,v) \in E} w(u,v)\, L_u$
    normalize $L_v$ to have unit $L_1$ norm
until convergence

Output: label distributions $\{ L_v \mid v \in V \}$
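A minimal sketch of the averaging formulation in Python (the data layout and function name are mine). One point on which formulations differ: this version re-pins each labeled "origin" node to its own label after every round, a common simplification; the original algorithm instead attaches separate injection nodes:

```python
def adsorption(edges, seeds, iterations=10):
    """Label propagation by iterative averaging.

    edges: dict mapping each node to a list of (neighbor, weight) pairs;
           include both directions, since labels flow both ways.
    seeds: dict mapping each origin node to its label, e.g. {'Alice': 'A'}.
    Returns a label distribution (dict label -> weight) per node.
    """
    labels = {v: ({seeds[v]: 1.0} if v in seeds else {}) for v in edges}
    for _ in range(iterations):
        new_labels = {}
        for v, neighbors in edges.items():
            dist = {}
            for u, w in neighbors:
                for label, weight in labels[u].items():
                    dist[label] = dist.get(label, 0.0) + w * weight
            total = sum(dist.values())
            if total > 0:
                dist = {l: x / total for l, x in dist.items()}  # unit L1 norm
            new_labels[v] = dist
        for v, l in seeds.items():
            new_labels[v] = {l: 1.0}  # re-pin the origin nodes
        labels = new_labels
    return labels
```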

42 © 2013 A. Haeberlen, Z. Ives Applications of Adsorption
- Recommendation (YouTube)
- Discovering relationships among data: classifying types of objects; finding labels for columns in tables; finding similar / related concepts in different tables or Web pages

43 © 2013 A. Haeberlen, Z. Ives Recap and Take-aways
We've had a whirlwind tour of common kinds of algorithms used on the Web:
- Path analysis: route planning, games, keyword search, etc.
- Clustering and classification: mining, recommendations, spam filtering, context-sensitive search, ad placement, etc.
- Link analysis: ranking, recommendations, ad placement
Many such algorithms (though not all) have a reasonably straightforward, often iterative, MapReduce formulation.

44 © 2013 A. Haeberlen, Z. Ives Next time
We've talked in depth about cloud Platform-as-a-Service capabilities. Next, we'll see in detail how to build Software-as-a-Service: dynamic HTML / AJAX, Node.js, and possibly Java servlets.

45 © 2013 A. Haeberlen, Z. Ives Stay tuned
Next time you will learn about: Web programming

