1 Graph Sparsifiers: A Survey Nick Harvey UBC Based on work by: Batson, Benczur, de Carli Silva, Fung, Hariharan, Harvey, Karger, Panigrahi, Sato, Spielman, Srivastava and Teng

2 Approximating Dense Objects by Sparse Ones
– Examples: floor joists, image compression

3 Approximating Dense Graphs by Sparse Ones
– Spanners: approximate all distances to within a factor α using only O(n^{1+2/α}) edges
– Low-stretch trees: approximate most distances to within O(log n) using only n-1 edges (n = # vertices)

4 Overview
– Definitions: cut & spectral sparsifiers; applications
– Cut sparsifiers
– Spectral sparsifiers: a random sampling construction; derandomization

5 Cut Sparsifiers (Karger '94)
Input: an undirected graph G=(V,E) with weights u : E → ℝ+.
Output: a subgraph H=(V,F) of G with weights w : F → ℝ+ such that |F| is small and
w(δ_H(U)) = (1 ± ε) u(δ_G(U)) ∀ U ⊆ V.
(Here δ_G(U) is the set of edges between U and V\U in G, and δ_H(U) the same in H.)

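The cut-sparsifier condition can be checked by brute force on a small graph: enumerate every U ⊆ V and compare cut weights. A minimal Python sketch (the 4-cycle example and all names are mine, not from the slides):

```python
from itertools import combinations

def cut_weight(edges, U):
    """Total weight of edges with exactly one endpoint in U."""
    return sum(w for (s, t, w) in edges if (s in U) != (t in U))

def is_cut_sparsifier(G_edges, H_edges, vertices, eps):
    """Brute-force check of w(delta_H(U)) = (1 +/- eps) u(delta_G(U)) for all U."""
    for r in range(1, len(vertices)):
        for U in combinations(vertices, r):
            U = set(U)
            g, h = cut_weight(G_edges, U), cut_weight(H_edges, U)
            if not ((1 - eps) * g <= h <= (1 + eps) * g):
                return False
    return True

# Toy example: G is a 4-cycle; H = G is trivially a sparsifier with eps = 0.
G = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0)]
print(is_cut_sparsifier(G, G, [0, 1, 2, 3], 0.0))  # True
```

The 2^|V| subsets make this check exponential, which is exactly why (as a later slide notes) verifying cut sparsifiers is hard in general.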

7 Generic Application of Cut Sparsifiers
Before: (dense) input graph G → (slow) algorithm A for some problem P → exact/approx output.
After: (efficient) sparsification algorithm S → sparse graph H approximately preserving the solution of P → algorithm A (now faster) → approximate output.
Examples of P: min s-t cut, sparsest cut, max cut, …

8 Relation to Expander Graphs
A graph H on V is an expander if, for some constant c, |δ_H(U)| ≥ c|U| ∀ U ⊆ V with |U| ≤ n/2.
Let G be the complete graph on V. If we give all edges of H weight w = n, then
w(δ_H(U)) ≥ c n |U| ≈ c |δ_G(U)| ∀ U ⊆ V with |U| ≤ n/2.
So expanders are similar to sparsifiers of the complete graph.

9 Relation to Expander Graphs
Simple random construction: the Erdos-Renyi graph G_{n,p} is an expander with high probability if p = Θ(log(n)/n). This gives an expander with Θ(n log n) edges with high probability. But aren't there much better expanders?

10 Spectral Sparsifiers (Spielman-Teng '04)
Input: an undirected graph G=(V,E) with weights u : E → ℝ+.
Def: the Laplacian of G is the matrix L_G such that x^T L_G x = Σ_{st∈E} u_st (x_s - x_t)² ∀ x ∈ ℝ^V.
L_G is positive semidefinite since this quadratic form is ≥ 0.
Example (electrical networks):
– View edge st as a resistor of resistance 1/u_st.
– Impose voltage x_v at every vertex v.
– Ohm's power law: P = V²/R.
– Power consumed on edge st is u_st (x_s - x_t)².
– Total power consumed is x^T L_G x.
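The identity x^T L_G x = Σ_{st∈E} u_st (x_s - x_t)² is easy to verify numerically. A small pure-Python sketch (the triangle example and function names are mine):

```python
def laplacian(n, edges):
    """Dense Laplacian of a weighted graph: degree terms on the diagonal,
    minus the weight on each off-diagonal edge entry."""
    L = [[0.0] * n for _ in range(n)]
    for s, t, w in edges:
        L[s][s] += w
        L[t][t] += w
        L[s][t] -= w
        L[t][s] -= w
    return L

def quad_form(L, x):
    """Compute x^T L x."""
    n = len(x)
    return sum(x[i] * L[i][j] * x[j] for i in range(n) for j in range(n))

# Toy check on a triangle with unit weights.
edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0)]
L = laplacian(3, edges)
x = [1.0, 2.0, 4.0]
energy = sum(w * (x[s] - x[t]) ** 2 for s, t, w in edges)  # 1 + 4 + 9 = 14
print(quad_form(L, x), energy)  # both 14.0
```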

11 Spectral Sparsifiers (Spielman-Teng '04)
Input: an undirected graph G=(V,E) with weights u : E → ℝ+.
Output: a subgraph H=(V,F) of G with weights w : F → ℝ+ such that |F| is small and
x^T L_H x = (1 ± ε) x^T L_G x ∀ x ∈ ℝ^V.
Spectral sparsifier ⇒ cut sparsifier: restricting to {0,1}-vectors x (the indicator vector of U) gives w(δ_H(U)) = (1 ± ε) u(δ_G(U)) ∀ U ⊆ V.

12 Cut vs Spectral Sparsifiers
Number of constraints:
– Cut: w(δ_H(U)) = (1 ± ε) u(δ_G(U)) ∀ U ⊆ V (2^n constraints)
– Spectral: x^T L_H x = (1 ± ε) x^T L_G x ∀ x ∈ ℝ^V (infinitely many constraints)
The spectral constraints are SDP feasibility constraints:
(1-ε) x^T L_G x ≤ x^T L_H x ≤ (1+ε) x^T L_G x ∀ x ∈ ℝ^V, i.e., (1-ε) L_G ⪯ L_H ⪯ (1+ε) L_G.
(Here X ⪯ Y means Y-X is positive semidefinite.)
The spectral constraints are actually easier to handle:
– Checking "Is H a spectral sparsifier of G?" is in P.
– Checking "Is H a cut sparsifier of G?" is non-uniform sparsest cut, so NP-hard.

13 Application of Spectral Sparsifiers
Consider the linear system L_G x = b; the actual solution is x := L_G⁻¹ b.
Instead, compute y := L_H⁻¹ b, where H is a spectral sparsifier of G.
We know (1-ε) L_G ⪯ L_H ⪯ (1+ε) L_G, so y has low multiplicative error:
‖y-x‖_{L_G} ≤ 2ε ‖x‖_{L_G}, where ‖z‖_{L_G} = √(z^T L_G z).
Computing y is fast since H is sparse: the conjugate gradient method takes O(n|F|) time (|F| = # of nonzero entries of L_H).

14 Application of Spectral Sparsifiers (continued)
Theorem [Spielman-Teng '04, Koutis-Miller-Peng '10]: a vector y with low multiplicative error for L_G x = b can be computed in O(m log n (log log n)²) time (m = # of edges of G).

15 Results on Sparsifiers
– Cut sparsifiers (combinatorial techniques): Karger '94, Benczur-Karger '96, Fung-Hariharan-Harvey-Panigrahi '11
– Spectral sparsifiers (linear-algebraic techniques): Spielman-Teng '04, Spielman-Srivastava '08, Batson-Spielman-Srivastava '09, de Carli Silva-Harvey-Sato '11
Two regimes: constructions with n·log^{O(1)} n / ε² edges in nearly linear time, and constructions with O(n/ε²) edges in poly(n) time.

16 Sparsifiers by Random Sampling
The complete graph is easy! Random sampling gives an expander (i.e., a sparsifier of the complete graph) with O(n log n) edges.

17 Sparsifiers by Random Sampling
For general graphs we can't sample all edges with the same probability. Idea [BK'96]: sample low-connectivity edges with high probability (keep these) and high-connectivity edges with low probability (eliminate most of these).

18 Non-uniform sampling algorithm [BK'96]
Input: graph G=(V,E), weights u : E → ℝ+.
Output: a subgraph H=(V,F) with weights w : F → ℝ+.
Choose a parameter ρ and compute probabilities { p_e : e ∈ E }.
For i = 1 to ρ:
– For each edge e ∈ E: with probability p_e, add e to F and increase w_e by u_e/(ρ p_e).
Note: E[|F|] ≤ ρ · Σ_e p_e, and E[w_e] = u_e ∀ e ∈ E ⇒ for every U ⊆ V, E[w(δ_H(U))] = u(δ_G(U)).
Can we do this so that the cut values are tightly concentrated and E[|F|] = n·log^{O(1)} n?
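The sampling loop translates directly into Python. This sketch (variable names mine) preserves the unbiasedness property E[w_e] = u_e; the p_e = 1 sanity check exercises it deterministically, since then every edge is kept in every round:

```python
import random

def sample_sparsifier(edges, p, rho, rng):
    """Generic non-uniform sampling template (BK'96 framework).
    edges: {e: u_e}; p: {e: p_e}; makes rho passes over E.
    Each kept copy of e contributes u_e / (rho * p_e), so E[w_e] = u_e."""
    w = {}
    for _ in range(rho):
        for e, u_e in edges.items():
            if rng.random() < p[e]:
                w[e] = w.get(e, 0.0) + u_e / (rho * p[e])
    return w

edges = {('a', 'b'): 2.0, ('b', 'c'): 3.0, ('a', 'c'): 1.0}
# Sanity check: with p_e = 1, every edge is kept every round and w_e = u_e.
w = sample_sparsifier(edges, {e: 1.0 for e in edges}, rho=4, rng=random.Random(0))
print(w == edges)  # True
```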

19 Benczur-Karger '96 (same sampling template)
Set ρ = O(log n/ε²) and p_e = 1/("strength" of edge e).
Then cuts are preserved to within (1 ± ε) and E[|F|] = O(n log n/ε²).
All strengths can be approximated in m·log^{O(1)} n time.
But what is "strength"? Can't we use "connectivity" instead?

20 Fung-Hariharan-Harvey-Panigrahi '11 (same sampling template)
Set ρ = O(log² n/ε²) and p_st = 1/(value of the min cut separating s and t).
Then cuts are preserved to within (1 ± ε) and E[|F|] = O(n log² n/ε²).
All these connectivities can be approximated in O(m + n log n) time.
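On a toy graph the probabilities p_st = 1/(min s-t cut) can be computed exactly with any max-flow routine (the fast approximation algorithm in the theorem is much more clever). A minimal Edmonds-Karp sketch, with a hypothetical two-triangles-plus-bridge example of mine:

```python
from collections import deque

def min_st_cut(n, edges, s, t):
    """Value of the minimum s-t cut, via Edmonds-Karp max flow.
    edges: list of (a, b, capacity) for an undirected graph."""
    cap = [[0.0] * n for _ in range(n)]
    for a, b, c in edges:
        cap[a][b] += c
        cap[b][a] += c
    flow = 0.0
    while True:
        # BFS for an augmenting path in the residual graph.
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            v = q.popleft()
            for z in range(n):
                if cap[v][z] > 1e-12 and parent[z] == -1:
                    parent[z] = v
                    q.append(z)
        if parent[t] == -1:
            return flow
        # Find the bottleneck capacity, then augment along the path.
        bottleneck, v = float('inf'), t
        while v != s:
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:
            cap[parent[v]][v] -= bottleneck
            cap[v][parent[v]] += bottleneck
            v = parent[v]
        flow += bottleneck

# Two unit-weight triangles joined by the single bridge edge (2,3):
edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0),
         (2, 3, 1.0), (3, 4, 1.0), (4, 5, 1.0), (3, 5, 1.0)]
p = {(s, t): 1.0 / min_st_cut(6, edges, s, t) for (s, t, _) in edges}
print(p[(2, 3)], p[(0, 1)])  # 1.0 0.5
```

The bridge has connectivity 1, so it is always kept (p = 1), while triangle edges have connectivity 2 and are sampled with probability 1/2, matching the "keep low-connectivity edges" intuition.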

21 Overview of Analysis
Most cuts hit a huge number of edges ⇒ extremely concentrated ⇒ whp, most cuts are close to their mean.

22 Overview of Analysis
High connectivity ⇒ low sampling probability; low connectivity ⇒ high sampling probability.
A cut that hits many low-connectivity (heavily sampled) edges is highly concentrated.
A cut that hits only one such edge would be poorly concentrated on its own, but the same cut also hits many high-connectivity edges, so it is highly concentrated overall.

23 Summary for Cut Sparsifiers
– Do non-uniform sampling of edges, with probabilities based on "connectivity".
– Decompose the graph into "connectivity classes" and argue concentration of all cuts.
– (BK'96 used "strength", not "connectivity".)
– Can get sparsifiers with O(n log n/ε²) edges: optimal for any independent sampling algorithm.

24 Spectral Sparsification
Input: graph G=(V,E), weights u : E → ℝ+.
Recall: x^T L_G x = Σ_{st∈E} u_st (x_s - x_t)²; call the term u_st (x_s - x_t)² =: x^T L_st x.
Goal: find weights w : E → ℝ+ such that most w_e are zero and
(1-ε) x^T L_G x ≤ Σ_{e∈E} w_e x^T L_e x ≤ (1+ε) x^T L_G x ∀ x ∈ ℝ^V, i.e., (1-ε) L_G ⪯ Σ_{e∈E} w_e L_e ⪯ (1+ε) L_G.
General problem: given matrices L_e satisfying Σ_e L_e = L_G, find coefficients w_e, mostly zero, such that (1-ε) L_G ⪯ Σ_e w_e L_e ⪯ (1+ε) L_G.

25 The General Problem: Sparsifying Sums of PSD Matrices
General problem: given PSD matrices L_e s.t. Σ_e L_e = L, find coefficients w_e, mostly zero, such that (1-ε) L ⪯ Σ_e w_e L_e ⪯ (1+ε) L.
Theorem [Ahlswede-Winter '02]: random sampling gives w with O(n log n/ε²) non-zeros.
Theorem [de Carli Silva-Harvey-Sato '11], building on [Batson-Spielman-Srivastava '09]: a deterministic algorithm gives w with O(n/ε²) non-zeros.
– Cut & spectral sparsifiers with O(n/ε²) edges [BSS'09]
– Sparsifiers with more properties and O(n/ε²) edges [dHS'11]

26 Vector Case
Vector analogue of the general problem: given vectors v_e ∈ [0,1]^n s.t. Σ_e v_e = v, find coefficients w_e, mostly zero, such that ‖Σ_e w_e v_e - v‖_∞ ≤ ε.
Theorem [Althofer '94, Lipton-Young '94]: there is a w with O(log n/ε²) non-zeros.
Proof: random sampling & the Hoeffding inequality.
Multiplicative version: there is a w with O(n log n/ε²) non-zeros such that (1-ε) v ≤ Σ_e w_e v_e ≤ (1+ε) v (entrywise).
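The random-sampling proof idea can be simulated directly: sample k indices and reweight so the estimate of v is unbiased; Hoeffding then controls each coordinate. A minimal sketch with uniform sampling on random data (the instance, sample size, and names are my choices, and this illustrates only the flavor of the argument, not its tight parameters):

```python
import random

def sparsify_vectors(vs, k, rng):
    """Uniformly sample k of the m vectors and give each sample weight m/k.
    The resulting estimate of v = sum(vs) is unbiased coordinate-wise."""
    m, n = len(vs), len(vs[0])
    est = [0.0] * n
    for _ in range(k):
        e = rng.randrange(m)
        for j in range(n):
            est[j] += vs[e][j] * m / k
    return est

rng = random.Random(1)
m, n, k = 5000, 8, 2000
vs = [[rng.random() for _ in range(n)] for _ in range(m)]
v = [sum(vs[e][j] for e in range(m)) for j in range(n)]
est = sparsify_vectors(vs, k, rng)  # at most k of the 5000 weights are non-zero
err = max(abs(a - b) for a, b in zip(est, v))
print(err < 500.0)  # True: each coordinate of v is ~2500, the error is a few percent
```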

27 Concentration Inequalities
Theorem [Chernoff '52, Hoeffding '63]: let Y_1,…,Y_k be i.i.d. random non-negative real numbers s.t. E[Y_i] = Z and Y_i ≤ uZ. Then (1/k)·Σ_i Y_i ∈ (1 ± ε)Z except with probability at most 2·exp(-Ω(ε²k/u)).
Theorem [Ahlswede-Winter '02]: let Y_1,…,Y_k be i.i.d. random PSD n×n matrices s.t. E[Y_i] = Z and Y_i ⪯ uZ. Then (1-ε)Z ⪯ (1/k)·Σ_i Y_i ⪯ (1+ε)Z except with probability at most 2n·exp(-Ω(ε²k/u)).
The only difference: the dimension factor n in the failure probability.

28 "Balls & Bins" Example
Problem: throw k balls into n bins. We want (max load)/(min load) ≤ 1 + ε. How big should k be?
Solution via the AW theorem: let Y_i be the all-zeros matrix except for a single n in a uniformly random diagonal entry. Then E[Y_i] = I and Y_i ⪯ nI. Set k = Θ(n log n/ε²); whp, every diagonal entry of Σ_i Y_i/k is in [1-ε, 1+ε].
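The balls-and-bins claim is easy to simulate. A sketch with an arbitrary constant 8 standing in for the Θ(·) (the parameters are my choices):

```python
import random
from math import log

def balls_and_bins(n, eps, rng):
    """Throw k = Theta(n log n / eps^2) balls uniformly at random into n bins;
    return loads normalized so that each has expectation 1."""
    k = int(8 * n * log(n) / eps ** 2)
    loads = [0] * n
    for _ in range(k):
        loads[rng.randrange(n)] += 1
    return [load * n / k for load in loads]

loads = balls_and_bins(n=10, eps=0.25, rng=random.Random(0))
print(max(loads) / min(loads))  # close to 1 with high probability
```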

29 Solving the General Problem
General problem: given PSD matrices L_e s.t. Σ_e L_e = L_G, find coefficients w_e, mostly zero, such that (1-ε) L_G ⪯ Σ_e w_e L_e ⪯ (1+ε) L_G.
To solve it with O(n log n/ε²) non-zeros, apply the AW theorem:
Repeat k := Θ(n log n/ε²) times:
– Pick an edge e with probability p_e := Tr(L_e L_G⁻¹)/n.
– Increment w_e by 1/(k·p_e).
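For graphs, Tr(L_e L_G⁻¹) = u_e · R_eff(e), where R_eff(s,t) is the effective resistance between s and t; these sum to n-1 over the edges of a connected graph, so the p_e are (up to the n vs n-1 normalization on the slide) a probability distribution. A pure-Python sketch computing them on a toy triangle, grounding one vertex to invert the singular Laplacian (the example and helper names are mine):

```python
def laplacian(n, edges):
    """Dense Laplacian of a weighted undirected graph."""
    L = [[0.0] * n for _ in range(n)]
    for s, t, w in edges:
        L[s][s] += w; L[t][t] += w; L[s][t] -= w; L[t][s] -= w
    return L

def solve_grounded(L, b):
    """Solve L x = b with x[0] fixed to 0 (requires sum(b) == 0, G connected).
    Gaussian elimination on the reduced (SPD) system over vertices 1..n-1."""
    n = len(L); m = n - 1
    A = [[L[i][j] for j in range(1, n)] + [b[i]] for i in range(1, n)]
    for col in range(m):
        for row in range(col + 1, m):
            f = A[row][col] / A[col][col]
            for k in range(col, m + 1):
                A[row][k] -= f * A[col][k]
    x = [0.0] * m
    for row in reversed(range(m)):
        s = A[row][m] - sum(A[row][k] * x[k] for k in range(row + 1, m))
        x[row] = s / A[row][row]
    return [0.0] + x

def effective_resistance(L, s, t):
    """R_eff(s,t): inject one unit of current at s, extract it at t."""
    b = [0.0] * len(L); b[s], b[t] = 1.0, -1.0
    x = solve_grounded(L, b)
    return x[s] - x[t]

# Triangle with unit weights: every edge has R_eff = 2/3,
# and sum_e u_e * R_eff(e) = n - 1 = 2.
edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0)]
L = laplacian(3, edges)
total = sum(w * effective_resistance(L, s, t) for s, t, w in edges)
p = {(s, t): w * effective_resistance(L, s, t) / total for s, t, w in edges}
print(round(total, 9), round(p[(0, 1)], 9))  # 2.0 and 1/3 per edge
```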

30 Derandomization
Vector problem: given vectors v_e ∈ [0,1]^n s.t. Σ_e v_e = v, find coefficients w_e, mostly zero, such that ‖Σ_e w_e v_e - v‖_∞ ≤ ε.
Theorem [Young '94]: the multiplicative weights method deterministically gives w with O(log n/ε²) non-zeros.
– Alternatively, use pessimistic estimators on the Hoeffding proof.
General problem: given PSD matrices L_e s.t. Σ_e L_e = L, find coefficients w_e, mostly zero, such that (1-ε) L ⪯ Σ_e w_e L_e ⪯ (1+ε) L.
Theorem [de Carli Silva-Harvey-Sato '11]: the matrix multiplicative weights method (Arora-Kale '07) deterministically gives w with O(n log n/ε²) non-zeros.
– Alternatively, use matrix pessimistic estimators (Wigderson-Xiao '06).

31 MWUM for "Balls & Bins"
Let λ_i = load in bin i; initially λ = 0. We want l ≤ λ_i ≤ u for lower and upper barrier values l < u that stay close together.
Introduce penalty functions exp(l - λ_i) and exp(λ_i - u).
Find a bin i to throw a ball into such that, after increasing l by δ_l and u by δ_u, the penalties don't grow:
Σ_i exp(l + δ_l - λ_i′) ≤ Σ_i exp(l - λ_i) and Σ_i exp(λ_i′ - (u + δ_u)) ≤ Σ_i exp(λ_i - u).
Careful analysis shows O(n log n/ε²) balls is enough.
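A deterministic toy version of this idea: always throw the ball where the lower-barrier penalty exp(l - λ_i) is largest, i.e., into the least-loaded bin. This simplified sketch (mine, not the full MWUM analysis with both barriers) shows how a penalty-guided greedy keeps the loads balanced:

```python
from math import exp

def mwum_balls(n, k):
    """Throw k balls greedily: each ball goes to the bin maximizing the
    lower-barrier penalty exp(l - lam_i), i.e., the least-loaded bin,
    while the lower barrier l advances by delta_l = 1/n per ball."""
    lam = [0.0] * n
    l = 0.0
    for _ in range(k):
        i = max(range(n), key=lambda j: exp(l - lam[j]))
        lam[i] += 1.0
        l += 1.0 / n
    return lam

loads = mwum_balls(n=10, k=1000)
print(max(loads) - min(loads))  # 0.0: the greedy rule keeps loads perfectly balanced
```

Here the greedy choice degenerates to round-robin; the point of the actual MWUM argument is that a choice satisfying both penalty conditions always exists, even in the matrix setting of the next slide where "bins" become eigenvalue directions.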

32 MMWUM for the General Problem
Let A = 0 and let λ denote its eigenvalues. We want l ≤ λ_i ≤ u.
Use penalty functions Tr exp(lI - A) and Tr exp(A - uI).
Find a matrix L_e such that, adding αL_e to A and increasing l by δ_l and u by δ_u, the penalties don't grow:
Tr exp((l + δ_l)I - (A + αL_e)) ≤ Tr exp(lI - A) and Tr exp((A + αL_e) - (u + δ_u)I) ≤ Tr exp(A - uI).
Careful analysis shows O(n log n/ε²) matrices is enough.

33 Beating Sampling & MMWUM
To get a better bound, make the penalty functions steeper: use Tr (A - lI)⁻¹ and Tr (uI - A)⁻¹.
Find a matrix L_e such that, adding αL_e to A and increasing l by δ_l and u by δ_u, the penalties don't grow:
Tr ((A + αL_e) - (l + δ_l)I)⁻¹ ≤ Tr (A - lI)⁻¹ and Tr ((u + δ_u)I - (A + αL_e))⁻¹ ≤ Tr (uI - A)⁻¹.
All eigenvalues stay within [l, u].

34 Beating Sampling & MMWUM (continued)
Theorem [Batson-Spielman-Srivastava '09] for the rank-1 case, [de Carli Silva-Harvey-Sato '11] for the general case: the steeper barrier functions give a solution w of the general problem with O(n/ε²) non-zeros.

35 Applications
Theorem [de Carli Silva-Harvey-Sato '11]: given PSD matrices L_e s.t. Σ_e L_e = L, there is an algorithm to find w with O(n/ε²) non-zeros such that (1-ε) L ⪯ Σ_e w_e L_e ⪯ (1+ε) L.
Application 1 (spectral sparsifiers with costs): given costs on the edges of G, we can find a sparsifier H whose cost is at most (1+ε) times the cost of G.
Application 2 (sparse SDP solutions): min { c^T y : Σ_i y_i A_i ⪰ B, y ≥ 0 }, where the A_i's and B are PSD, has a nearly optimal solution with O(n/ε²) non-zeros.

36 Open Questions
– Sparsifiers for directed graphs
– More constructions of sparsifiers with O(n/ε²) edges; perhaps randomized?
– Iterative constructions of expander graphs
– More control of the weights w_e
– A combinatorial proof of spectral sparsification
– More applications of our general theorem

