Sparse Approximations

Presentation transcript:

Sparse Approximations
Nick Harvey, University of British Columbia

Approximating Dense Objects by Sparse Objects
Floor joists: solid wood joists vs. engineered joists

Approximating Dense Objects by Sparse Objects
Bridges: masonry arch vs. truss arch

Approximating Dense Objects by Sparse Objects
Bones: human femur vs. robin bone

Mathematically
Can an object with many pieces be approximately represented by fewer pieces?
Independent random sampling usually does well.
Theme of this talk: When can we beat random sampling?
[Figure: a dense matrix vs. a sparse matrix; a dense graph vs. a sparse graph]

Talk Outline
Vignette #1: Discrepancy theory
Vignette #2: Singular values and eigenvalues
Vignette #3: Graphs
Theorem on “Spectrally Thin Trees”

Discrepancy
Given vectors v_1,…,v_n ∈ ℝ^d with ‖v_i‖_p bounded, want y ∈ {−1,+1}^n with ‖Σ_i y_i v_i‖_q small.
Eg1: If ‖v_i‖_∞ ≤ 1 then a uniformly random y gives E‖Σ_i y_i v_i‖_∞ ≤ O(√(n log d)).
Eg2: If ‖v_i‖_∞ ≤ 1 then ∃y s.t. ‖Σ_i y_i v_i‖_∞ ≤ O(√n).
Non-algorithmic proofs of Eg2:
Spencer ’85: partial coloring + entropy method
Gluskin ’89: Sidak’s lemma
Giannopoulos ’97: partial coloring + Sidak
Algorithmic proofs:
Bansal ’10: Brownian motion + semidefinite program
Bansal-Spencer ’11: Brownian motion + potential function
Lovett-Meka ’12: Brownian motion

Discrepancy
Given vectors v_1,…,v_n ∈ ℝ^d with ‖v_i‖_p bounded, want y ∈ {−1,+1}^n with ‖Σ_i y_i v_i‖_q small.
Eg1: If ‖v_i‖_∞ ≤ 1 then a uniformly random y gives E‖Σ_i y_i v_i‖_∞ ≤ O(√(n log d)).
Eg2: If ‖v_i‖_∞ ≤ 1 then ∃y s.t. ‖Σ_i y_i v_i‖_∞ ≤ O(√n).
Eg3: If ‖v_i‖_∞ ≤ β, ‖v_i‖_1 ≤ δ, and ‖Σ_i v_i‖_∞ ≤ 1, then ∃y with ‖Σ_i y_i v_i‖_∞ small, with a log(δ/β²) factor in the bound.
Harvey ’13: using the Lovász Local Lemma.
Question: Can the log(δ/β²) factor be improved?
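The gap between Eg1 and Eg2 is easy to probe numerically. The sketch below (the helper name and test matrix are mine, not from the talk) reports the best ℓ∞ discrepancy found among a few hundred uniformly random colorings; this is the baseline that Spencer’s theorem improves by a √(log d) factor asymptotically.

```python
import numpy as np

def random_coloring_discrepancy(V, trials=200, seed=0):
    """Best l_inf discrepancy over `trials` uniformly random +/-1 colorings.

    V is an (n, d) array whose rows v_i satisfy ||v_i||_inf <= 1.
    A single random coloring is ~sqrt(n log d); Spencer's theorem
    guarantees some coloring achieves O(sqrt(n)).
    """
    rng = np.random.default_rng(seed)
    best = np.inf
    for _ in range(trials):
        y = rng.choice([-1.0, 1.0], size=V.shape[0])
        best = min(best, np.abs(y @ V).max())  # || sum_i y_i v_i ||_inf
    return best

rng = np.random.default_rng(1)
V = rng.choice([-1.0, 1.0], size=(100, 100))  # n = d = 100, each ||v_i||_inf = 1
print(random_coloring_discrepancy(V))
```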

Talk Outline
Vignette #1: Discrepancy theory
Vignette #2: Singular values and eigenvalues
Vignette #3: Graphs
Theorem on “Spectrally Thin Trees”

Partitioning sums of rank-1 matrices
Let v_1,…,v_n ∈ ℝ^d satisfy Σ_i v_i v_iᵀ = I and ‖v_i‖² ≤ δ. Want y ∈ {−1,+1}^n with ‖Σ_i y_i v_i v_iᵀ‖_2 small.
Random sampling: E‖Σ_i y_i v_i v_iᵀ‖_2 ≤ O(√(δ log d)).
Rudelson ’96: proofs using majorizing measures, then non-commutative Khintchine.
Marcus-Spielman-Srivastava ’13: ∃y ∈ {−1,+1}^n with ‖Σ_i y_i v_i v_iᵀ‖_2 ≤ O(√δ).
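A quick numerical illustration of this setting (synthetic data; the whitening trick and names are mine): build an isotropic family of rank-one matrices with small δ and measure the spectral norm under a uniformly random signing, which typically lands around √(δ log d) rather than the √δ that MSS ’13 guarantees exists.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 30, 900

# Synthetic isotropic family: rows v_i with sum_i v_i v_i^T = I, ||v_i||^2 <= delta.
V = rng.normal(size=(m, d))
V = V @ np.linalg.cholesky(np.linalg.inv(V.T @ V))   # whitening
delta = float((V ** 2).sum(axis=1).max())

# Uniformly random signing: measure || sum_i y_i v_i v_i^T ||_2.
y = rng.choice([-1.0, 1.0], size=m)
S = (V * y[:, None]).T @ V                           # sum_i y_i v_i v_i^T
norm = float(np.abs(np.linalg.eigvalsh(S)).max())
print(delta, norm)
```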

Partitioning sums of matrices
Given d×d symmetric matrices M_1,…,M_n with Σ_i M_i = I and ‖M_i‖_2 ≤ δ. Want y ∈ {−1,+1}^n with ‖Σ_i y_i M_i‖_2 small.
Random sampling: E‖Σ_i y_i M_i‖_2 ≤ O(√(δ log d)); also follows from non-commutative Khintchine.
Ahlswede-Winter ’02: using the matrix moment generating function.
Tropp ’12: using the matrix cumulant generating function.

Partitioning sums of matrices
Given d×d symmetric matrices M_1,…,M_n with Σ_i M_i = I and ‖M_i‖_2 ≤ δ. Want y ∈ {−1,+1}^n with ‖Σ_i y_i M_i‖_2 small.
Random sampling: E‖Σ_i y_i M_i‖_2 ≤ O(√(δ log d)).
Question: ∃y ∈ {−1,+1}^n with ‖Σ_i y_i M_i‖_2 ≤ O(√δ)? False!
Conjecture: Suppose Σ_i M_i = I and ‖M_i‖_{Sch-1} ≤ δ (Schatten 1-norm). ∃y ∈ {−1,+1}^n with ‖Σ_i y_i M_i‖_2 ≤ O(√δ)?
MSS ’13: the rank-one case is true.
Harvey ’13: the diagonal case is true (ignoring a log(·) factor).

Partitioning sums of matrices
Given d×d symmetric matrices M_1,…,M_n with Σ_i M_i = I and ‖M_i‖_2 ≤ δ. Want y ∈ {−1,+1}^n with ‖Σ_i y_i M_i‖_2 small.
Random sampling: E‖Σ_i y_i M_i‖_2 ≤ O(√(δ log d)).
Question: Suppose only that ‖M_i‖_2 ≤ 1. ∃y ∈ {−1,+1}^n with ‖Σ_i y_i M_i‖_2 ≤ O(√n)?
Spencer/Gluskin: the diagonal case is true.

Column-subset selection
Given vectors v_1,…,v_n ∈ ℝ^d with ‖v_i‖_2 = 1. Let st.rank = n/‖Σ_i v_i v_iᵀ‖_2, and let k be chosen in terms of st.rank and ε (formula elided on the slide).
∃y ∈ {0,1}^n s.t. Σ_i y_i = k and (1−ε)² · (…) ≤ λ_k(Σ_i y_i v_i v_iᵀ).
Spielman-Srivastava ’09: potential function argument.
Youssef ’12: ∃y ∈ {0,1}^n s.t. Σ_i y_i = k, (1−ε)² · (…) ≤ λ_k(Σ_i y_i v_i v_iᵀ), and λ_1(Σ_i y_i v_i v_iᵀ) ≤ (1+ε)² · (…).

Column-subset selection up to the stable rank
Given vectors v_1,…,v_n ∈ ℝ^d with ‖v_i‖_2 = 1. Let st.rank = n/‖Σ_i v_i v_iᵀ‖_2, and let k be on the order of st.rank (formula elided on the slide).
For y ∈ {0,1}^n s.t. Σ_i y_i = k, can we control λ_k(Σ_i y_i v_i v_iᵀ) and λ_1(Σ_i y_i v_i v_iᵀ)?
λ_k can be very small, say O(1/d).
Rudelson’s theorem: can get λ_1 ≤ O(log d) and λ_k > 0.
Harvey-Olver ’13: λ_1 ≤ O(log d / log log d) and λ_k > 0.
MSS ’13: if Σ_i v_i v_iᵀ = I, can get λ_1 ≤ O(1) and λ_k > 0.

Talk Outline
Vignette #1: Discrepancy theory
Vignette #2: Singular values and eigenvalues
Vignette #3: Graphs
Theorem on “Spectrally Thin Trees”

Graph Laplacian
Graph with weights u: vertices {a, b, c, d}, edge weights u(ab) = 2, u(ac) = 5, u(bc) = 1, u(cd) = 10.
Laplacian matrix L_u = D − A:
        a    b    c    d
  a     7   −2   −5    0
  b    −2    3   −1    0
  c    −5   −1   16  −10
  d     0    0  −10   10
(each diagonal entry is the weighted degree of that node; the (a,c) entry is the negative of u(ac))
Effective resistance from s to t: the voltage difference when each edge e is a (1/u_e)-ohm resistor and a 1-amp current source is placed between s and t; equals (e_s − e_t)ᵀ L_u⁺ (e_s − e_t), where L_u⁺ is the pseudoinverse of L_u.
Effective conductance: c_st = 1 / (effective resistance from s to t).
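These definitions are easy to check numerically. A minimal sketch (helper names are mine), using the slide’s example graph and numpy’s pseudoinverse:

```python
import numpy as np

def laplacian(n, weighted_edges):
    """Build L_u = D - A from (u, v, weight) triples."""
    L = np.zeros((n, n))
    for u, v, w in weighted_edges:
        L[u, u] += w
        L[v, v] += w
        L[u, v] -= w
        L[v, u] -= w
    return L

def effective_resistance(L, s, t):
    """(e_s - e_t)^T L^+ (e_s - e_t), with L^+ the Moore-Penrose pseudoinverse."""
    x = np.zeros(L.shape[0])
    x[s], x[t] = 1.0, -1.0
    return x @ np.linalg.pinv(L) @ x

# The slide's example: a=0, b=1, c=2, d=3 with u(ab)=2, u(ac)=5, u(bc)=1, u(cd)=10.
edges = [(0, 1, 2.0), (0, 2, 5.0), (1, 2, 1.0), (2, 3, 10.0)]
L = laplacian(4, edges)
R_ad = effective_resistance(L, 0, 3)
print(R_ad, 1.0 / R_ad)  # effective resistance and conductance between a and d
```

For these weights, series-parallel reduction gives the a-d resistance as 3/17 + 1/10 = 47/170, which matches the pseudoinverse computation.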

Spectral approximation of graphs
A dense graph with edge weights u (Laplacian L_u) vs. a sparse graph with edge weights w (Laplacian L_w). [Figure: the two weighted graphs and their Laplacian matrices]
α-spectral sparsifier: L_u ⪯ L_w ⪯ α·L_u

Ramanujan Graphs
Suppose L_u is the complete graph on n vertices (u_e = 1 ∀e).
Lubotzky-Phillips-Sarnak ’86: for infinitely many d and n, ∃w ∈ {0,1}^E such that Σ_e w_e = dn/2 (in fact L_w is d-regular) and L_w spectrally approximates L_u with ratio roughly (d + 2√(d−1))/(d − 2√(d−1)) (the Ramanujan bound).
MSS ’13: holds for all d ≥ 3, and all n = c·2^k.
Friedman ’04: if L_w is a random d-regular graph, then the Ramanujan bound holds up to a factor (1+ε), ∀ε > 0, with high probability.

Arbitrary graphs
Spielman-Srivastava ’08: for any graph L_u with n = |V|, ∃w ∈ ℝ^E such that |support(w)| = O(n log(n)/ε²) and L_u ⪯ L_w ⪯ (1+ε)·L_u.
Proof: follows from Rudelson’s theorem.
MSS ’13: for any graph L_u with n = |V|, ∃w ∈ ℝ^E such that each w_e is an integer multiple of Θ(ε²)·(effective conductance of e), |support(w)| = O(n/ε²), and L_u ⪯ L_w ⪯ (1+ε)·L_u.
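The sampling scheme behind the Spielman-Srivastava result can be sketched directly: sample edges with probability proportional to w_e times effective resistance (the leverage score) and reweight so L_w is unbiased. The version below is a simplification (i.i.d. sampling with replacement; function names are mine), not the paper’s exact procedure.

```python
import numpy as np

def laplacian_from_edges(n, edges, weights):
    """Weighted Laplacian L = D - A from an edge list and a weight array."""
    L = np.zeros((n, n))
    for (u, v), w in zip(edges, weights):
        L[u, u] += w
        L[v, v] += w
        L[u, v] -= w
        L[v, u] -= w
    return L

def spectral_sparsify(n, edges, weights, q, seed=0):
    """Sample q edges i.i.d. with p_e proportional to w_e * (effective
    resistance of e); each sampled copy gets weight w_e / (q * p_e),
    so E[L_w] = L_u."""
    rng = np.random.default_rng(seed)
    Lplus = np.linalg.pinv(laplacian_from_edges(n, edges, weights))
    reff = np.array([Lplus[u, u] + Lplus[v, v] - 2 * Lplus[u, v]
                     for u, v in edges])
    p = weights * reff
    p /= p.sum()                      # leverage scores sum to n - 1
    new_w = np.zeros(len(edges))
    for e in rng.choice(len(edges), size=q, p=p):
        new_w[e] += weights[e] / (q * p[e])
    return new_w

# Complete graph on 8 vertices, unit weights.
n = 8
edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
weights = np.ones(len(edges))
w2 = spectral_sparsify(n, edges, weights, q=200)
print(np.count_nonzero(w2), "of", len(edges), "edges kept")
```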

Spectrally-thin trees
Question: Let G be an unweighted graph with n vertices. Let C = min_e (effective conductance of edge e). Want a subtree T of G with L_T ⪯ (α/C)·L_G.
Equivalent to Goddyn’s Conjecture ’85: there is a subtree T that is combinatorially thin, i.e., T contains at most an O(1/C) fraction of the edges of every cut.
Relates to conjectures of Tutte (’54) on nowhere-zero flows, and to approximations of the traveling salesman problem.

Spectrally-thin trees
Question: Let G be an unweighted graph with n vertices. Let C = min_e (effective conductance of edge e). Want a subtree T of G with L_T ⪯ (α/C)·L_G.
Rudelson’s theorem: easily gives α = O(log n).
Harvey-Olver ’13: α = O(log n / log log n); moreover, there is an efficient algorithm to find such a tree.
MSS ’13: α = O(1), but not algorithmic.

Talk Outline
Vignette #1: Discrepancy theory
Vignette #2: Singular values and eigenvalues
Vignette #3: Graphs
Theorem on “Spectrally Thin Trees”

Spectrally Thin Trees
Proof overview: given an (unweighted) graph G with effective conductances ≥ C, we can find an unweighted tree T with L_T ⪯ (α/C)·L_G for α = O(log n / log log n).
Step 1: Show independent sampling gives spectral thinness, but not a tree.
► Sample every edge e independently with probability x_e = 1/c_e.
Step 2: Show dependent sampling gives a tree, and spectral thinness still works.

Matrix Concentration
Theorem [Tropp ’12]: Let Y_1,…,Y_m be independent PSD matrices of size n×n. Let Y = Σ_i Y_i and Z = E[Y]. Suppose Y_i ⪯ R·Z a.s. Then
Pr[Y ⋠ α·Z] ≤ n · (e^{α−1}/α^α)^{1/R}.

Independent sampling
Theorem [Tropp ’12]: Let M_1,…,M_m be n×n PSD matrices. Let D(x) be a product distribution on {0,1}^m with marginals x. Let Y = Σ_i X_i M_i where X ∼ D(x), so Z = E[Y] = Σ_i x_i M_i. Suppose M_i ⪯ Z (i.e., R = 1). Then Pr[Y ⋠ α·Z] ≤ n · (e^{α−1}/α^α).
Define M_e = c_e·L_e, where L_e is the Laplacian of the single edge e. Then Z = L_G and M_e ⪯ Z holds (these are properties of effective conductances).
Define sampling probabilities x_e = 1/c_e. It is known that Σ_e x_e = n−1.
Claim: independent sampling gives T ⊆ E with E[|T|] = n−1 and, setting α = 6 log n / log log n, L_T ⪯ (α/C)·L_G whp. But T is not a tree!
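Tropp’s bound can be sanity-checked numerically. The sketch below builds synthetic rank-one PSD summands M_i with Z = Σ_i x_i M_i = I and small R = max_i ‖M_i‖, samples independently with marginal x = 1/2, and inspects the spectrum of Y (everything here is a synthetic illustration, not the talk’s graph setting):

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 20, 4000

# Synthetic rank-one PSD summands, whitened so that Z = E[Y] = I.
V = rng.normal(size=(m, d))
A = np.linalg.cholesky(np.linalg.inv(V.T @ V))
W = V @ A                       # rows w_i with sum_i w_i w_i^T = I
x = 0.5                         # marginal sampling probability
# M_i = (w_i w_i^T) / x, so Z = sum_i x * M_i = I and R = max_i ||M_i|| is small.

keep = rng.random(m) < x        # X_i ~ Bernoulli(x), independent
Y = (W[keep].T @ W[keep]) / x   # Y = sum_i X_i M_i
eig = np.linalg.eigvalsh(Y)
print(eig.min(), eig.max())     # concentrated near 1, as matrix Chernoff predicts
```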

Spectrally Thin Trees
Proof overview: given an (unweighted) graph G with effective conductances ≥ C, we can find an unweighted tree T with L_T ⪯ (α/C)·L_G for α = O(log n / log log n).
Step 1: Show independent sampling gives spectral thinness, but not a tree.
► Sample every edge e independently with probability x_e = 1/c_e.
Step 2: Show dependent sampling gives a tree, and spectral thinness still works.
► Run pipage rounding to get a tree T with Pr[e ∈ T] = x_e = 1/c_e.

Pipage rounding [Ageev-Sviridenko ’04, Srinivasan ’01, Calinescu et al. ’07, Chekuri et al. ’09]
Let P be any matroid polytope, e.g., the convex hull of characteristic vectors of spanning trees.
Given fractional x:
Find coordinates a and b s.t. the line z ↦ x + z(e_a − e_b) stays in the current face.
Find the two points where this line leaves P.
Randomly choose one of those points s.t. the expectation is x.
Repeat until x = χ_T is integral.
x is a martingale: the expectation of the final χ_T is the original fractional x.
[Figure: polytope P with vertices χ_T1,…,χ_T6 and interior point x]
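A toy version of the loop above, for the simplest matroid polytope {x ∈ [0,1]^m : Σ_i x_i = k} rather than the spanning-tree polytope (handling matroid faces in general takes more machinery); the swap step and the martingale property are the same:

```python
import random

def pipage_round(x, rng=None):
    """Pipage rounding sketch on {x in [0,1]^m : sum(x) = k}.

    Repeatedly pick two fractional coordinates a, b and move along
    e_a - e_b until some coordinate hits 0 or 1, choosing the
    direction at random so that E[x] is unchanged (x is a martingale).
    """
    rng = rng or random.Random(0)
    x = list(x)
    eps = 1e-9
    while True:
        frac = [i for i, v in enumerate(x) if eps < v < 1 - eps]
        if len(frac) < 2:
            break
        a, b = frac[0], frac[1]
        up = min(1 - x[a], x[b])       # furthest move with z > 0
        down = min(x[a], 1 - x[b])     # furthest move with z < 0
        if rng.random() < down / (up + down):   # probabilities chosen so E[z] = 0
            x[a] += up
            x[b] -= up
        else:
            x[a] -= down
            x[b] += down
    return [int(round(v)) for v in x]

x0 = [0.5, 0.5, 0.25, 0.75, 1.0, 0.0]
print(pipage_round(x0), "sum preserved:", sum(pipage_round(x0)))
```

Each swap makes at least one coordinate integral, so the loop terminates, and since every swap preserves Σ_i x_i, the output is a vertex of the polytope.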

Pipage rounding and concavity
Say f : ℝ^m → ℝ is concave under swaps if z ↦ f(x + z(e_a − e_b)) is concave ∀x ∈ P, ∀a, b ∈ [m].
Let X_0 be the initial point and χ_T the final point visited by pipage rounding.
Claim: If f is concave under swaps then E[f(χ_T)] ≤ f(X_0). [Jensen]
Let E ⊆ {0,1}^m be an event, and let g : [0,1]^m → ℝ be a pessimistic estimator for E, i.e., Pr_{X∼D(x)}[X ∈ E] ≤ g(x) for every x.
Claim: Suppose g is concave under swaps. Then Pr[χ_T ∈ E] ≤ g(X_0).

Chernoff Bound
Chernoff Bound: Fix any w, x ∈ [0,1]^m and let μ = wᵀx. Define g_{t,θ}(x) = e^{−tθ} · Π_i (1 − x_i + x_i e^{t w_i}). Then Pr_{X∼D(x)}[wᵀX ≥ θ] ≤ g_{t,θ}(x).
Claim: g_{t,θ} is concave under swaps. [Elementary calculus]
Let X_0 be the initial point and χ_T the final point visited by pipage rounding. Let μ = wᵀX_0. Then Pr[wᵀχ_T ≥ θ] ≤ g_{t,θ}(X_0).
Bound achieved by independent sampling is also achieved by pipage rounding.
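Both the estimator and its concavity under swaps can be checked directly. The sketch below uses the standard Markov/mgf form of g_{t,θ} (assumed here, since the slide’s formula is an image) and numerically verifies that g is concave along a swap line x + z(e_a − e_b):

```python
import numpy as np

def g(t, theta, w, x):
    """Chernoff pessimistic estimator (standard Markov/mgf form, assumed):
    Pr[w^T X >= theta] <= e^{-t theta} * prod_i (1 - x_i + x_i e^{t w_i})
    for X drawn from a product distribution with marginals x."""
    return np.exp(-t * theta) * np.prod(1.0 - x + x * np.exp(t * w))

# Numerically check concavity along a swap direction e_a - e_b.
rng = np.random.default_rng(0)
m = 10
w = rng.random(m)
x = rng.uniform(0.2, 0.8, size=m)
t, theta = 1.0, 0.75 * float(w @ x)
dirn = np.zeros(m)
dirn[0], dirn[1] = 1.0, -1.0           # e_a - e_b with a = 0, b = 1

zs = np.linspace(-0.1, 0.1, 101)
vals = np.array([g(t, theta, w, x + z * dirn) for z in zs])
second_diff = vals[:-2] - 2.0 * vals[1:-1] + vals[2:]
print(second_diff.max())                # negative: g is concave along the swap line
```

Along the swap line, only two factors of the product vary, one increasing and one decreasing affinely in z, so their product is a concave quadratic; this is the "elementary calculus" behind the claim.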

Matrix Pessimistic Estimators
Theorem [Tropp ’12]: Let M_1,…,M_m be n×n PSD matrices. Let D(x) be a product distribution on {0,1}^m with marginals x. Let Y = Σ_i X_i M_i and Z = Σ_i x_i M_i, and suppose M_i ⪯ Z. After normalizing so that Z = I, let g_{t,θ}(x) = e^{−tθ} · tr exp(Σ_i log(I + x_i(e^{t M_i} − I))). Then Pr[Y ⋠ θ·Z] ≤ g_{t,θ}(x), and g_{t,θ}(x) is small for a suitable choice of t. (Pessimistic estimator.)
Main Theorem: g_{t,θ} is concave under swaps.
Bound achieved by independent sampling is also achieved by pipage rounding.

Spectrally Thin Trees
Proof overview: given an (unweighted) graph G with effective conductances ≥ C, we can find an unweighted tree T with L_T ⪯ (α/C)·L_G for α = O(log n / log log n).
Step 1: Show independent sampling gives spectral thinness, but not a tree.
► Sample every edge e independently with probability x_e = 1/c_e.
Step 2: Show dependent sampling gives a tree, and spectral thinness still works.
► Run pipage rounding to get a tree T with Pr[e ∈ T] = x_e = 1/c_e.

Matrix Analysis
Matrix concentration inequalities are usually proven via sophisticated inequalities in matrix analysis.
Rudelson: non-commutative Khintchine inequality.
Ahlswede-Winter: Golden-Thompson inequality: if A, B are symmetric, then tr(e^{A+B}) ≤ tr(e^A e^B).
Tropp: Lieb’s concavity inequality [1973]: if A, B are Hermitian and C is PD, then z ↦ tr exp(A + log(C + zB)) is concave.
Key technical result: a new variant of Lieb’s theorem: if A is Hermitian, B_1, B_2 are PSD, and C_1, C_2 are PD, then z ↦ tr exp(A + log(C_1 + zB_1) + log(C_2 − zB_2)) is concave.

Questions
Can the Spencer/Gluskin theorem be extended to matrices?
Can MSS ’13 be made algorithmic?
Can MSS ’13 be extended to large-rank matrices?
O(1)-spectrally thin trees exist; can one be found algorithmically?
Are O(1)-spectrally thin trees helpful for Goddyn’s conjecture?