Presentation on theme: "Random Walk on Graph" — Presentation transcript:

1 Random Walk on a Graph
Start from a given node at time t = 0. At each step, choose a neighbor uniformly at random (possibly the one you just came from) and move there. Repeat until time t = n.
Q1. Where does this converge to as n → ∞?
Q2. How fast does it converge?
Q3. What are the implications for different applications?
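To make the process concrete, here is a minimal simulation sketch in Python (not from the slides; the toy graph and node names are made up for illustration). Running it long enough previews the answer to Q1: the visit frequencies settle to fixed values.

```python
import random
from collections import Counter

# Toy undirected graph as an adjacency list (hypothetical example).
graph = {
    "a": ["b", "c"],
    "b": ["a", "c", "d"],
    "c": ["a", "b", "d"],
    "d": ["b", "c"],
}

def random_walk(graph, start, steps):
    """Walk for `steps` steps, choosing a uniformly random neighbor each time."""
    visits = Counter()
    node = start
    for _ in range(steps):
        visits[node] += 1
        node = random.choice(graph[node])   # may move back to the previous node
    return visits

# Empirical visit frequencies after many steps approximate the limiting distribution.
visits = random_walk(graph, "a", steps=100_000)
total = sum(visits.values())
for node in sorted(graph):
    print(node, round(visits[node] / total, 3))
```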

2 Random Walks on Graphs
Node degree k_i → move to any neighbor with probability 1/k_i. This is a Markov chain!
The transition matrix A has entries A_ij = 1/k_i if j is a neighbor of i, and 0 otherwise.
Start at node i → p(0) = (0, 0, …, 1, …, 0, 0)
p(n) = p(0) A^n,    π = π A    [where π = lim_{n→∞} p(n)]
Q: What is π for a random walk on a graph?

3 Random Walks on Undirected Graphs
Stationarity: π(z) = Σ_x π(x) p(x,z), with p(x,z) = 1/k_x for each neighbor z of x.
We could try to solve these (global balance) equations directly. Not easy!
Define N(z) = {neighbors of z} and check the guess π(x) ∝ k_x:
Σ_{x ∈ N(z)} k_x · p(x,z) = Σ_{x ∈ N(z)} k_x · (1/k_x) = Σ_{x ∈ N(z)} 1 = k_z
Normalize by dividing both sides by Σ_x k_x = 2|E| (|E| = m = number of edges):
Σ_{x ∈ N(z)} (k_x / 2|E|) · p(x,z) = k_z / 2|E|
So π(x) = k_x / 2|E| is the stationary distribution: it satisfies the stationarity equation π = πP.
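A small numerical check of this result (my own sketch, on a hypothetical 4-node graph): the degree-proportional vector satisfies π = πP, and power iteration from a point mass converges to the same π.

```python
import numpy as np

# Toy undirected graph as an adjacency matrix (nodes a, b, c, d).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)

k = A.sum(axis=1)          # node degrees k_x
P = A / k[:, None]         # transition matrix: p(x, y) = 1/k_x for each neighbor y
pi = k / k.sum()           # claimed stationary distribution: k_x / 2|E|

print(np.allclose(pi @ P, pi))   # True: pi satisfies pi = pi P

# Power iteration from a point mass converges to the same pi (the graph is aperiodic).
p = np.array([1.0, 0, 0, 0])
for _ in range(200):
    p = p @ P
print(np.round(p, 3), np.round(pi, 3))
```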

4 What about Random Walks on Directed Graphs?
Assign each node an initial centrality of 1/n (for n nodes).
[Figure: example directed graph with resulting node centralities such as 1/8, 4/13, 2/13, 1/13]

5 A Problematic Graph
Q: What is the problem with this graph?
A: All centrality "points" will eventually accumulate at F and G.
Solution: when at node i, with probability β jump to any of the N nodes uniformly at random; with probability 1−β jump to a random neighbor of i.
Q: Does this remind you of something?
A: The PageRank algorithm! The PageRank of node i is the stationary probability of a random walk on this (modified) directed graph. The factor β in the PageRank formula avoids the problem above by "leaking" a small amount of centrality from each node to all other nodes.

6 PageRank as a Random Walk
A (bored) web surfer either surfs a linked webpage with probability 1−β, or surfs a random page (e.g. starts a new search) with probability β.
The probability of ending up at page X after a large enough time = the PageRank of page X!
PageRank can be generalized with a node-specific jump vector β = (β1, β2, …, βn).
Undirected network: removing β → degree centrality.
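A short power-iteration sketch of this modified walk (assumptions: uniform teleportation, β = 0.15, and a made-up 4-page link graph). The resulting vector is the PageRank.

```python
import numpy as np

# Hypothetical directed link graph: out_links[i] = pages that page i links to.
out_links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
n = 4
beta = 0.15            # teleportation ("random page") probability

# Row-stochastic transition matrix of the plain directed walk.
Q = np.zeros((n, n))
for i, links in out_links.items():
    for j in links:
        Q[i, j] = 1.0 / len(links)

# Modified walk: with prob beta jump anywhere, with prob 1-beta follow a link.
P = beta / n + (1 - beta) * Q

# Power iteration: PageRank is the stationary distribution of P.
r = np.full(n, 1.0 / n)
for _ in range(100):
    r = r @ P
print(np.round(r, 3))
```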

7 Applications of RW: Measuring Large Networks
We are interested in studying the properties (degree distribution, path lengths, clustering, connectivity, etc.) of many real networks (Internet, Facebook, YouTube, Flickr, etc.), as they contain much important ($$$) information.
E.g. to plot the degree distribution, we would need to crawl the whole network and obtain a "degree value" for each node. These networks might contain millions of nodes!

8 Online Social Networks (OSNs)
[Table: OSN sizes (500 million, 200 million, 130 million, 100 million, 75 million users) and traffic ranks (2, 9, 12, 43, 10, 29)]
In total: > 1 billion users as of October 2010 (over 15% of the world's population, and over 50% of the world's Internet users!)

9 Measuring Facebook
Facebook: 500+ million users, 130 friends each (on average), 8 bytes (64 bits) per user ID.
The raw connectivity data, with no attributes: 500 M × 130 × 8 B ≈ 520 GB.
To get this data, one would have to download 100+ TB of (uncompressed) HTML data!
This is neither feasible nor practical. Solution: Sampling!

10 Measuring Large Networks (for mere mortals)
Obtaining the complete dataset is difficult: companies are usually unwilling to share data for privacy and performance reasons (e.g. Facebook will ban accounts if it detects extensive crawling), and there is tremendous overhead in measuring everything (~100 TB for Facebook).
Representative samples are desirable, both to study properties and to test algorithms.

11 Sampling
What: topology? nodes?
How: directly? by exploration?

12 (1) Breadth-First Search (BFS)
Starting from a seed, explore all neighbor nodes; the process continues iteratively, without replacement.
BFS leads to a bias towards high-degree nodes [Lee et al., "Statistical properties of Sampled Networks", Phys. Review E, 2006].
Early measurement studies of OSNs used BFS as their primary sampling technique, e.g. [Mislove et al.], [Ahn et al.], [Wilson et al.].
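A minimal BFS crawler sketch (illustrative only; get_neighbors is a hypothetical stand-in for whatever API or crawler returns a node's friend list), stopping after a fixed node budget:

```python
from collections import deque

def bfs_sample(seed, get_neighbors, budget):
    """Crawl outward from `seed` without replacement until `budget` nodes are sampled."""
    sampled, queue = {seed}, deque([seed])
    while queue and len(sampled) < budget:
        node = queue.popleft()
        for neighbor in get_neighbors(node):
            if neighbor not in sampled and len(sampled) < budget:
                sampled.add(neighbor)
                queue.append(neighbor)
    return sampled

# Usage with an in-memory toy graph standing in for the crawled network.
toy = {1: [2, 3], 2: [1, 3, 4], 3: [1, 2], 4: [2, 5], 5: [4]}
print(bfs_sample(1, lambda v: toy[v], budget=4))
```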

13 (2) Random Walk (RW)
Explores the graph one node at a time, with replacement.
Restart from different seeds, or run multiple seeds in parallel.
Does this lead to a good sample?

14 Implications for Random Walk Sampling
Say we collect a small part of the Facebook graph using a RW. There is a higher chance of visiting high-degree nodes, so high-degree nodes are over-represented and low-degree nodes are under-represented in the sample.
[Figure: real degree distribution vs. two candidate sampled degree distributions — which one results from a RW?]
[1] M. Gjoka, M. Kurant, C. T. Butts and A. Markopoulou, "Walking in Facebook: A Case Study of Unbiased Sampling of OSNs", INFOCOM 2010.

15 Random Walk Sampling of Facebook
[Figure: sampled vs. real degree distribution]
Real average node degree: 94. Observed average node degree: 338.
Q: How can we fix this?
A: Intuition → we need to reduce (increase) the probability of visiting high- (low-) degree nodes.

16 Markov Chain Monte Carlo (MCMC)
Q: How should we modify the Random Walk?
A: Markov Chain Monte Carlo theory.
Original chain: move x → y with probability Q(x,y); stationary distribution π(x).
Desired chain: stationary distribution w(x) (for uniform sampling: w(x) = 1/N).
New transition probabilities: P(x,y) = Q(x,y)·a(x,y) for y ≠ x, where a(x,y) is an acceptance probability (rejected moves stay at x).

17 MCMC (2)
a(x,y): the probability of accepting a proposed move x → y.
Q: How should we choose a(x,y) so as to converge to the desired stationary distribution w(x)?
A: It suffices that w(x)P(x,y) = w(y)P(y,x) for all x, y. Q: Why? These are the local balance (time-reversibility) equations, and they imply that w is stationary.
So we need w(x)Q(x,y)a(x,y) = w(y)Q(y,x)a(y,x); denote this common value b(x,y) = b(y,x).
Since a(x,y) ≤ 1 (it is a probability): b(x,y) ≤ w(x)Q(x,y) and b(x,y) = b(y,x) ≤ w(y)Q(y,x).
Taking the largest feasible b(x,y) gives a(x,y) = min{1, w(y)Q(y,x) / (w(x)Q(x,y))}.

18 MCMC for Uniform Sampling
w(x) = w(y) (= 1/n … the exact value doesn't really matter), and Q(x,y) = 1/k_x, so Q(y,x)/Q(x,y) = k_x/k_y.
Hence a(x,y) = min{1, k_x/k_y}: the Metropolis-Hastings random walk.
Move to a lower-degree node → always accepted.
Move to a higher-degree node → rejected with a probability related to the degree ratio.

19 Metropolis-Hastings (MH) Random Walk
Explore the graph one node at a time, with replacement.
In the stationary distribution, every node is equally likely: π(x) = 1/N (a uniform sample of nodes).
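A sketch of the MH random walk for uniform node sampling, using the acceptance rule min{1, k_x/k_y} from the previous slide (toy graph and names are illustrative):

```python
import random
from collections import Counter

def mh_random_walk(graph, start, steps):
    """Metropolis-Hastings walk whose stationary distribution is uniform over nodes."""
    visits = Counter()
    x = start
    for _ in range(steps):
        visits[x] += 1
        y = random.choice(graph[x])                  # propose a random neighbor
        # Accept with prob min(1, k_x / k_y): moves to lower-degree nodes are always
        # accepted, moves to higher-degree nodes are rejected with prob related to the ratio.
        if random.random() < min(1.0, len(graph[x]) / len(graph[y])):
            x = y                                    # accept: move
        # else: reject and stay at x (counts as another sample of x)
    return visits

graph = {"a": ["b", "c"], "b": ["a", "c", "d"], "c": ["a", "b", "d"], "d": ["b", "c"]}
visits = mh_random_walk(graph, "a", 100_000)
total = sum(visits.values())
print({v: round(c / total, 3) for v, c in sorted(visits.items())})  # ≈ 0.25 each
```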

20 Degree Distribution of FB with MHRW
The sampled degree distribution is almost identical to the real one.
MCMC methods have MANY other applications: sampling, optimization.

21 Node Importance: Who is most “central”?

22 Node Centrality: Depends on Application
Influence: which social network nodes should I pick to advertise/spread a video/product/opinion?
Resilience: which node(s) should I attack to disconnect the network?
Malware/virus infection: which nodes should I immunize (e.g. upload a patch to) to stop a given Internet "worm" from spreading quickly?
Performance: which nodes are the bottleneck in a network?
Search engines: which nodes contain the most relevant information?
A centrality measure implicitly solves some optimization problem.

23 Centrality: Importance Based on Network Position
In each of the following networks, X has higher centrality than Y according to a particular measure: indegree, outdegree, betweenness, closeness.
[Figure: four example networks, one per measure]

24 Degree Centrality
"He who has many friends is most important." When is the number of connections the best centrality measure?
Examples: people who will do favors for you; people you can talk to (influence set, information access, …); the influence of an article in terms of citations (using in-degree).

25 Normalized Degree Centrality
Divide by the maximum possible degree, i.e. N−1: C_D(i) = k_i / (N−1).
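A tiny sketch of normalized degree centrality on an illustrative adjacency-list graph:

```python
graph = {"a": ["b", "c"], "b": ["a", "c", "d"], "c": ["a", "b", "d"], "d": ["b", "c"]}
n = len(graph)

# Normalized degree centrality: degree divided by the maximum possible, N - 1.
degree_centrality = {v: len(neigh) / (n - 1) for v, neigh in graph.items()}
print(degree_centrality)   # e.g. b and c: 3/3 = 1.0, a and d: 2/3 ≈ 0.67
```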

26 Betweenness Centrality: Definition
C_B(i) = Σ_{j<k, i∉{j,k}} g_jk(i) / g_jk
where g_jk = the number of shortest paths connecting j and k, and g_jk(i) = the number of those paths that node i is on.
Usually normalized by the number of node pairs excluding i, i.e. (N−1)(N−2)/2 for undirected graphs.
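Below is a sketch that computes this quantity for small unweighted graphs using Brandes' algorithm (a standard method; the slides do not prescribe any particular implementation). The toy path graph is illustrative.

```python
from collections import deque

def betweenness(graph):
    """Brandes' algorithm for (non-normalized) betweenness on an unweighted, undirected graph."""
    bc = {v: 0.0 for v in graph}
    for s in graph:
        # BFS from s, recording shortest-path counts (sigma) and predecessors.
        dist = {v: -1 for v in graph}
        sigma = {v: 0 for v in graph}
        preds = {v: [] for v in graph}
        dist[s], sigma[s] = 0, 1
        order, queue = [], deque([s])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in graph[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # Accumulate dependencies in reverse BFS order.
        delta = {v: 0.0 for v in graph}
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += (sigma[v] / sigma[w]) * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    # Each undirected pair was counted twice (once per endpoint as source).
    return {v: b / 2 for v, b in bc.items()}

graph = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}  # a path: a-b-c-d
print(betweenness(graph))   # the interior "bridge" nodes b and c get the highest scores
```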

27 Betweenness on Toy Networks
[Figure: non-normalized betweenness values on toy networks, including a bridge node]

28 Betweenness vs. Degree Centrality
Nodes are sized by degree and colored by betweenness.
Can you spot nodes with high betweenness but relatively low degree? What about high degree but relatively low betweenness?

29 Why is Betweenness Centrality Important?
Connectivity: compare removing a random node, removing the highest-degree node, and removing the highest-betweenness node.

30 Why is Betweenness Centrality Important?
The network below is a wireless network (e.g. a sensor network). Nodes run on battery → total energy Emax. Each node picks a destination randomly and sends data at a constant rate; every packet going through a node consumes E of that node's energy.
Q: How long would it take until the first node runs out of battery?
[Figure: wireless network with source-destination pairs S1→D1, S2→D2]

31 How About in This Network?

32 Why is Betweenness Centrality Important?
Monitoring: where would you place a traffic monitor in order to capture the maximum number of packets (if this were your university network)? Where would you place traffic cameras if this were a street network?

33 Why is Betweenness Centrality Important?
Traffic flow: each link has capacity 1.
Q: What is the maximum throughput between S and D?
A: By the Max-Flow Min-Cut theorem, the max flow equals the minimum number of links that must be removed to disconnect S from D → here the S-D throughput = 1.
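To illustrate the theorem, here is a compact Edmonds-Karp max-flow sketch (a standard algorithm, not something from the slides) run on a hypothetical unit-capacity network whose min S-D cut is 1:

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp max flow (BFS augmenting paths) on a capacity dict {(u, v): c}."""
    # Residual capacities; include reverse edges with 0 initial capacity.
    residual = dict(capacity)
    for (u, v) in capacity:
        residual.setdefault((v, u), 0)
    nodes = {u for u, _ in residual} | {v for _, v in residual}
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v in nodes:
                if v not in parent and residual.get((u, v), 0) > 0:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow          # no augmenting path left: flow equals the min cut
        # Find the bottleneck capacity along the path, then push flow along it.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[e] for e in path)
        for (u, v) in path:
            residual[(u, v)] -= bottleneck
            residual[(v, u)] += bottleneck
        flow += bottleneck

# Toy network with unit-capacity links (hypothetical topology); min cut is the single c-D link.
cap = {("S", "a"): 1, ("S", "b"): 1, ("a", "c"): 1, ("b", "c"): 1, ("c", "D"): 1}
print(max_flow(cap, "S", "D"))   # 1 = min number of links whose removal disconnects S and D
```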

34 Spectral Analysis of (Ergodic) Markov Chains
If a Markov chain (defined by transition matrix P) is ergodic (irreducible, aperiodic, and positive recurrent), then P^(n)_ik → π_k, where π = [π1, π2, …, πn].
Q: But how fast does the chain converge? E.g. how many steps until we are "close enough" to π?
A: This depends on the eigenvalues of P. The convergence time is also called the mixing time.

35 Eigenvalues and Eigenvectors of Matrix P
Left eigenvectors: a row vector π is a left eigenvector for eigenvalue λ of matrix P iff πP = λπ, i.e. Σ_k π_k p_ki = λ π_i.
Right eigenvectors: a column vector v is a right eigenvector for eigenvalue λ of matrix P iff Pv = λv, i.e. Σ_k p_ik v_k = λ v_i.
Q: What eigenvalues and eigenvectors can we guess already?
A: λ = 1 is a left eigenvalue with eigenvector π, the stationary distribution; λ = 1 is a right eigenvalue with eigenvector v = 1 (the all-ones vector).

36 Eigenvalues and Eigenvectors for 2-State Chains
Both systems (left and right) have non-zero solutions → (P − λI) is singular: there exists v ≠ 0 such that (P − λI)v = 0, so the determinant |P − λI| = 0.
(p11 − λ)(p22 − λ) − p12 p21 = 0 → λ1 = 1, λ2 = 1 − p12 − p21 (substitute back and confirm with some algebra).
|λ2| < 1. (Normalization: the left eigenvector π(1) is scaled to be a probability distribution, and the eigenvectors are scaled so that v(i)·π(i) = 1 for all i.)
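A quick numerical confirmation (illustrative values p12 = 0.3, p21 = 0.2) that the eigenvalues are 1 and 1 − p12 − p21:

```python
import numpy as np

p12, p21 = 0.3, 0.2                      # illustrative transition probabilities
P = np.array([[1 - p12, p12],
              [p21, 1 - p21]])

eigvals = np.linalg.eigvals(P)
print(np.sort(eigvals)[::-1])            # [1.0, 0.5] = [1, 1 - p12 - p21]
```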

37 Diagonalization
Eigenvalue decomposition: P = U Λ U^{-1}.
Q: What is P^(n)? A: P^(n) = (U Λ U^{-1})^n = U Λ^n U^{-1}, and Λ^n = diag(λ1^n, λ2^n, …) = diag(1, λ2^n, …).
Q: How fast does the chain converge to the stationary distribution?
A: It converges exponentially fast in n, as (λ2)^n.
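A small demonstration (same illustrative 2-state chain) that the distance to π shrinks geometrically, proportionally to |λ2|^n:

```python
import numpy as np

P = np.array([[0.7, 0.3],
              [0.2, 0.8]])
lam, U = np.linalg.eig(P)                # P = U diag(lam) U^{-1}
lam2 = sorted(abs(lam))[-2]              # second-largest eigenvalue modulus (= 0.5 here)

# Stationary distribution: left eigenvector for eigenvalue 1, normalized to sum to 1.
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmax(np.isclose(w, 1))])
pi = pi / pi.sum()

p = np.array([1.0, 0.0])                 # start from state 1
for n in range(1, 6):
    p = p @ P
    # The distance to pi decays geometrically, proportionally to lam2**n.
    print(n, round(np.abs(p - pi).sum(), 6), round(lam2**n, 6))
```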

38 Generalization to M-State Markov Chains
We'll assume that there are M distinct eigenvalues (see the notes for repeated ones).
Matrix P is stochastic → all eigenvalues satisfy |λi| ≤ 1.
Q: Why? A: If Pv = λv and |v_i| is the largest entry of v, then |λ||v_i| = |Σ_k p_ik v_k| ≤ Σ_k p_ik |v_k| ≤ |v_i|, so |λ| ≤ 1.
Q: How fast does an (ergodic) chain converge to the stationary distribution?
A: Exponentially, with rate given by the second-largest eigenvalue modulus.

39 Speed of Sampling on this Network?
λ2 (the 2nd largest eigenvalue) is related to the (balanced) min-cut of the graph.
The more "partitioned" a graph is into clusters with few links between them → the longer the convergence time of the respective Markov chain → the slower the random-walk search/sampling.

40 Community Detection - Clustering

41 Laplacian (slide credit: Faloutsos, Tong)
L = D − A, where D is the diagonal degree matrix (d_ii = d_i) and A is the adjacency matrix.
[Figure: example 4-node graph with its Laplacian]

42 Weighted Laplacian (slide credit: Faloutsos, Tong)
[Figure: example graph with edge weights (10, 4, 0.3, 2) and its weighted Laplacian]

43 Laplacian: Fast Facts
The all-ones vector satisfies L·1 = 0, so zero is always an eigenvalue.
If the graph has k connected components, zero is an eigenvalue with multiplicity k.
Fiedler ('73) called the second-smallest eigenvalue λ2(L) the "algebraic connectivity" of a graph: the further it is from 0, the more connected the graph.

44 Connected Components (slide credit: Faloutsos, Tong)
[Figure: graph G(V,E) on nodes 1-7 with two connected components, its Laplacian L, and eig(L)]
#zeros among the eigenvalues of L = #connected components

45 Connected Components (slide credit: Faloutsos, Tong)
[Figure: graph G(V,E) on nodes 1-7, its Laplacian L, and eig(L); the second-smallest eigenvalue is ≈ 0.01]
#zeros = #components; an eigenvalue very close to 0 indicates a "good cut".
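A small numpy sketch (hypothetical 6-node graph) showing that the number of (near-)zero Laplacian eigenvalues tracks the component/cluster structure:

```python
import numpy as np

def laplacian_eigs(edges, n):
    """Eigenvalues of L = D - A for an undirected graph on nodes 0..n-1."""
    A = np.zeros((n, n))
    for u, v in edges:
        A[u, v] = A[v, u] = 1
    L = np.diag(A.sum(axis=1)) - A
    return np.sort(np.linalg.eigvalsh(L))

# Two disconnected triangles: two zero eigenvalues = two components.
two_components = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]
print(np.round(laplacian_eigs(two_components, 6), 3))

# Add a single bridge edge between the triangles: one zero eigenvalue remains,
# and the second-smallest eigenvalue is small compared to the rest, indicating a "good cut".
bridged = two_components + [(2, 3)]
print(np.round(laplacian_eigs(bridged, 6), 3))
```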

46 Spectral Image Segmentation (Shi-Malik ‘00)

47 The second eigenvector

48 Second Eigenvector’s sparsest cut

