Embedding Metrics into Geometric Spaces


Embedding Metrics into Geometric Spaces Anupam Gupta Carnegie Mellon University 3ème cycle romand de Recherche Opérationnelle, 2010 Seminar, Lecture #2

Metric space M = (V, d): a set V of points; symmetric, non-negative distances d(x, y); the triangle inequality d(x, y) ≤ d(x, z) + d(z, y); and d(x, x) = 0.
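As a quick sanity check, these axioms can be verified mechanically on a finite point set. A small illustrative sketch (the function name and example points are ours, not from the slides):

```python
from itertools import permutations

def is_metric(V, d):
    """Check the metric-space axioms for a finite point set V with distance dict d."""
    for x in V:
        if d[(x, x)] != 0:                               # identity
            return False
    for x, y in permutations(V, 2):
        if d[(x, y)] < 0 or d[(x, y)] != d[(y, x)]:      # non-negativity, symmetry
            return False
    for x, y, z in permutations(V, 3):
        if d[(x, y)] > d[(x, z)] + d[(z, y)]:            # triangle inequality
            return False
    return True

# A 3-point path metric: d(a,c) = d(a,b) + d(b,c)
V = ["a", "b", "c"]
d = {(u, u): 0 for u in V}
for u, v, w in [("a", "b", 1), ("b", "c", 1), ("a", "c", 2)]:
    d[(u, v)] = d[(v, u)] = w
```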

in the previous lecture… We saw: embeddings into distributions over trees, and β-padded decompositions. Today: padded decompositions and approximation algorithms, and embeddings into geometric spaces (ℓp spaces), in particular ℓ1 and ℓ2 (Euclidean) space.

Padded decompositions and approximations

recall from last lecture A metric (V, d) admits β-padded decompositions if, for every Δ, we can output a random partition V = V1 ⊎ V2 ⊎ … ⊎ Vk such that each Vj has diameter ≤ Δ, and Pr[ x and y in different clusters ] ≤ β · d(x, y)/Δ. Theorem: Every n-point metric admits O(log n)-padded decompositions.
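A minimal sketch of one standard construction behind this theorem — random-radius, random-priority ball carving (function and variable names are ours; this is an illustration, not the slides' own pseudocode):

```python
import random

def padded_decomposition(V, d, Delta):
    """One draw of a random partition: every cluster has diameter <= Delta;
    with a random radius and a random ordering of centers, points x, y land in
    different clusters with probability O(log n) * d(x,y) / Delta."""
    r = random.uniform(Delta / 4.0, Delta / 2.0)   # single random radius
    centers = list(V)
    random.shuffle(centers)                        # random priority order
    cluster_of = {}
    for c in centers:
        for x in V:
            if x not in cluster_of and d[(c, x)] <= r:
                cluster_of[x] = c                  # first center within r claims x
    return cluster_of

# Points 0..7 on a line, d(i, j) = |i - j|
V = list(range(8))
d = {(i, j): abs(i - j) for i in V for j in V}
```

Every point is assigned (it is within distance 0 ≤ r of itself), and each cluster sits inside a ball of radius r ≤ Δ/2, so its diameter is at most Δ.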

multi-cut Given a graph G = (V, E) with k source-sink pairs (sp, tp), find the fewest edges to delete to separate all source-sink pairs. NP-hard, and APX-hard for k ≥ 3. Best known: O(log k) approximation [Garg Vazirani Yannakakis]

relaxation of multi-cut Suppose we assign lengths to edges such that shortest-path-distance(sp, tp) ≥ 1 for all p. One possible setting: length of cut edges in OPT = 1, all others = 0; total length = OPT. So the (fractional) setting that minimizes total length is at most OPT, and it can be found by linear programming.

algorithm idea Given such fractional edge-lengths (with total length L ≤ OPT), use these lengths to figure out which edges to cut. If E[ number of edges cut ] ≤ O(log n) × L, we'd have a logarithmic approximation!

randomized algorithm for multi-cut Given lengths on edges with shortest-path-distance(sp, tp) ≥ 1 for all p, take an O(log n)-padded decomposition of this metric with Δ = 0.99. Facts: Each terminal pair is separated (cluster diameter ≤ Δ < 1 ≤ d(sp, tp)). Pr[ edge e cut ] ≤ length(e) × O(log n).

Embeddings into ℓp spaces

ℓp spaces Consider real space Rm with the ℓp metrics: for x, y in Rm and 1 ≤ p < ∞, |x − y|p = ( Σi |xi − yi|p )1/p, and |x − y|∞ = maxi |xi − yi|. Since we will deal with finite (n-point) metrics we can (and will) give embeddings into finite dimensions. In this lecture, we will ignore the dimensionality and just say ℓp.
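Concretely, with the convention p = ∞ for the max-norm, a small pure-Python sketch (the function name is ours):

```python
def lp_dist(x, y, p):
    """lp distance between points x, y in R^m; pass p = float('inf') for the max-norm."""
    diffs = [abs(a - b) for a, b in zip(x, y)]
    if p == float("inf"):
        return max(diffs)
    return sum(t ** p for t in diffs) ** (1.0 / p)

# The point (3, 4) vs the origin: Manhattan 7, Euclidean 5, max-norm 4.
```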

inter-relationships For 1 ≤ p ≤ q ≤ 2: ℓq embeds into ℓp. (Next lecture, we'll see some ideas behind ℓ2 into ℓ1.) For 2 ≤ q ≤ ∞: ℓ2 embeds into ℓq. Everything embeds into ℓ∞.

Focus on ℓ1: the “Manhattan” or “taxicab” metric; ℓ2: Euclidean space. And introducing ℓ2-squared: the squares of Euclidean distances (these do not in general satisfy the triangle inequality). Henceforth, consider only metrics in ℓ2-squared.

recall: distortion Let M = (V, d), M’ = (V’, d’), and f: V → V’. stretch(f) = maxx,y in V d’(f(x), f(y)) / d(x, y); contraction(f) = maxx,y in V d(x, y) / d’(f(x), f(y)); distortion(f) = stretch(f) × contraction(f). M is “close” to M’ if there exists a “low”-distortion map f: M → M’.
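These definitions translate directly into code. A sketch with hypothetical names (d_target is the metric of the host space):

```python
def distortion(V, d, f, d_target):
    """distortion(f) = stretch(f) * contraction(f), maximized over pairs in V."""
    stretch = contraction = 0.0
    for i, x in enumerate(V):
        for y in V[i + 1:]:
            ratio = d_target(f(x), f(y)) / d(x, y)
            stretch = max(stretch, ratio)                 # worst expansion
            contraction = max(contraction, 1.0 / ratio)   # worst shrinkage
    return stretch * contraction

# Points on the real line with d(a, b) = |a - b|
line = lambda a, b: abs(a - b)
```

Note that uniform scaling has distortion 1: doubling every distance doubles the stretch but halves the contraction.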

Some jargon We say that a metric (V,d) “embeds in ℓp” or “is in ℓp” if it embeds isometrically (i.e., with distortion 1) into ℓp Hence, we are using ℓp to denote both the metric space, and also the class of metrics that embed into ℓp. (I’ll be careful when it’s ambiguous.)

Is every metric in ℓ2?

Is every metric in ℓ1?

central question today Given a metric (V,d) how well does it embed into ℓp spaces?

general n-point metrics Upper bounds: every n-point metric embeds into ℓ2 (and hence into ℓ1) with distortion O(log n) [Bourgain]. Lower bounds: there exist metrics that require distortion Ω(log n) into ℓ2.

tree metrics (the simplest class of metrics)

planar metrics (very common subclass of metrics)

algorithmic question These were all “uniform” results. What about the algorithmic question: Given a metric (X, d), what is the smallest distortion D possible for embedding this metric? ℓ2: use semi-definite programming. ℓ1: NP-hard. Open question: o(√(log n)) approximation for this problem.

today’s menu padded decompositions and multicut the sparsest cut and ℓ1 embeddings general metric → ℓ1 with distortion O(log n) exists a metric → ℓ2 requires distortion Ω(log n)

finding balanced separators Given an edge-weighted graph G, divide the vertex-set into two parts such that 1) each part contains ≈ half the vertices 2) weight of edges between two parts is small. Useful for divide-and-conquer algorithms.

the sparsest cut problem Given an edge-weighted graph G, divide the vertex-set into two parts (S, V\S) such that [ weight of edges between S and V\S ] / ( |S| × |V\S| ) is minimized.

the sparsest cut problem Given an edge-weighted graph G, divide the vertex-set into two parts (S, V\S) such that [ weight of edges between S and V\S ] / ( |S| × |V\S| ) is minimized. Theorem: If every n-point metric embeds into ℓ1 with distortion α, then there is an α-approximation algorithm for sparsest cut using LPs. Theorem: If every n-point metric in ℓ2-squared embeds into ℓ1 with distortion α, then there is an α-approximation algorithm for sparsest cut using SDPs.
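For intuition about the quantity being approximated, the cut-based optimum can be brute-forced on tiny graphs (exponential in n; an illustration only, not the algorithm the theorems describe — names are ours):

```python
from itertools import combinations

def sparsest_cut(n, edges):
    """min over nonempty S with |S| <= n/2 of |E(S, V\\S)| / (|S| * |V\\S|),
    by exhaustive enumeration of cuts (unit edge weights)."""
    best = float("inf")
    for k in range(1, n // 2 + 1):                # |S| <= n/2 covers all cuts
        for S in combinations(range(n), k):
            s = set(S)
            crossing = sum(1 for u, v in edges if (u in s) != (v in s))
            best = min(best, crossing / (len(s) * (n - len(s))))
    return best

# 4-cycle 0-1-2-3-0: cutting two opposite edges gives 2 / (2 * 2) = 0.5
```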

the general idea Define xuv = 1 if exactly one of u, v lies in S, = 0 otherwise. Note that x is a metric. sparsest cut = min over cut-based x of ( Σuv in E xuv ) / ( Σuv in V×V xuv ) — NP-hard. Relaxing to min over all metric x of ( Σuv in E xuv ) / ( Σuv in V×V xuv ) can be computed by a linear program.

the general idea min over cut-based x of ( Σuv in E xuv ) / ( Σuv in V×V xuv ) — NP-hard — equals min over x in ℓ1 of the same ratio [Avis Deza 89]. Relaxing to min over all metric x can be computed by a linear program. Theorem: If every n-point metric embeds into ℓ1 with distortion α, then there is an α-approximation algorithm for sparsest cut using LPs.

relaxations and LP rounding

today’s menu padded decompositions and multicut the sparsest cut and ℓ1 embeddings general metric → ℓ1 with distortion O(log n) exists a metric → ℓ2 requires distortion Ω(log n)

trees into ℓ1 Use a new dimension for each edge.
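A sketch of that one-line idea (function names are ours): give each tree edge its own coordinate, and set coordinate e of f(x) to the edge weight w_e exactly when e lies on the root-to-x path. Then |f(x) − f(y)|₁ sums precisely the edges on the x–y path, so the embedding is isometric.

```python
def tree_to_l1(n, edges, root=0):
    """Isometric embedding of an edge-weighted tree into l1:
    coordinate i of f(x) is w_i if edge i lies on the root-to-x path, else 0."""
    adj = {u: [] for u in range(n)}
    for i, (u, v, w) in enumerate(edges):
        adj[u].append((v, i, w))
        adj[v].append((u, i, w))
    f = {root: [0.0] * len(edges)}
    stack, seen = [root], {root}
    while stack:                       # DFS from the root
        u = stack.pop()
        for v, i, w in adj[u]:
            if v not in seen:
                seen.add(v)
                coords = f[u][:]
                coords[i] = w          # turn on the coordinate for edge i
                f[v] = coords
                stack.append(v)
    return f
```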

ℓ1 forms a convex cone

now use FRT…

a different way If the metric admits a β-padded decomposition: easier result [Rao 99]: an O(β √(log n))-distortion embedding into ℓ2; more involved [KLMN 03]: an O(√(β log n))-distortion embedding into ℓ2.

today’s menu padded decompositions and multicut the sparsest cut and ℓ1 embeddings general metric → ℓ1 with distortion O(log n) exists a metric → ℓ2 requires distortion Ω(log n)

a lower bound Big picture: suppose we have non-negative values aij and bij. For a metric d, define R(d) = ( Σij aij dij² ) / ( Σij bij dij² ). We will show metrics d for which we can set these a’s, b’s such that R(d) is ≈ 1/(n log² n), but R(d’) for any d’ in ℓ2 is at least ≈ 1/n. Hence we must need Ω(log n) distortion for this metric d → ℓ2.

to start off, some background: (edge)-expander graphs; logarithmic diameter; most points are Ω(log n) apart; spectral properties of expander graphs.

expander graphs γ-expanders: graphs such that for any node subset S with |S| ≤ |V|/2, | edges from S to V \ S | > γ |S|. γ is called the “(edge)-expansion” of G. Interesting when degree(G) and γ are both constants independent of G. Theorem: A random 3-regular graph is an Ω(1)-expander whp. Explicit constructions of O(1)-degree, Ω(1)-expanders are known.
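The theorem concerns random 3-regular graphs; as a concrete stand-in we can brute-force the edge expansion of one fixed 3-regular graph, the Petersen graph, whose expansion is exactly 1 (our own illustration, exponential in n, so toy sizes only):

```python
from itertools import combinations

# Petersen graph: outer 5-cycle, inner pentagram, five spokes (3-regular, girth 5)
outer = [(i, (i + 1) % 5) for i in range(5)]
inner = [(5 + i, 5 + (i + 2) % 5) for i in range(5)]
spokes = [(i, i + 5) for i in range(5)]
E = outer + inner + spokes

def edge_expansion(n, edges):
    """gamma = min over S with |S| <= n/2 of |E(S, V\\S)| / |S|, by brute force."""
    best = float("inf")
    for k in range(1, n // 2 + 1):
        for S in combinations(range(n), k):
            s = set(S)
            cut = sum(1 for u, v in edges if (u in s) != (v in s))
            best = min(best, cut / len(s))
    return best
```

The minimizer is the outer 5-cycle: its 5 spokes cross the cut, giving 5/5 = 1.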

expander graphs facts(1) Fact #1: Any pair of nodes is at most O(log n) apart.

expander graphs facts(2) Fact #2: Most pairs (x, y) of nodes are Ω(log n) apart.

Metric d for which R(d) is small aij = 1 if edge (i, j) is present in the expander, 0 otherwise; bij = 1 for all pairs; dij = shortest-path distance between i and j in the expander. Fact #3: R(d) = ( Σij aij dij² ) / ( Σij bij dij² ) = Θ( 1/(n log² n) ).
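This ratio can be evaluated numerically on a small stand-in for the expander. On the Petersen graph (BFS shortest paths; a_ij supported on the 15 edges, b_ij ≡ 1 on all pairs) the ratio comes out to 15/135 = 1/9 — a toy illustration of the small-R(d) phenomenon, not a proof of the asymptotics:

```python
from collections import deque

# Petersen graph: outer 5-cycle, inner pentagram, five spokes
n = 10
edges = ([(i, (i + 1) % 5) for i in range(5)]
         + [(5 + i, 5 + (i + 2) % 5) for i in range(5)]
         + [(i, i + 5) for i in range(5)])
adj = {u: [] for u in range(n)}
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)

def bfs_dist(src):
    """Shortest-path distances from src by breadth-first search."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

d = {(u, v): dv for u in range(n) for v, dv in bfs_dist(u).items()}
numer = sum(d[(u, v)] ** 2 for u, v in edges)                            # a_ij = 1 on edges
denom = sum(d[(u, v)] ** 2 for u in range(n) for v in range(u + 1, n))   # b_ij = 1
R = numer / denom
```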

R(d’) is large for Euclidean metrics Fact #4: R(d’) = Ω(1/n) for every Euclidean metric d’.

today’s menu the sparsest cut and ℓ1 embeddings general metric → ℓ1 with distortion O(log n) general metric → ℓ2 with distortion O(log3/2 n) exists a metric → ℓ2 requires distortion Ω(log n) Finally, some (newer) extensions of these ideas…

better sparsest cut via SDPs Theorem: If every n-point metric embeds into ℓ1 with distortion α, then there is an α-approximation algorithm for sparsest cut using LPs. Theorem: If every n-point metric in ℓ2-squared embeds into ℓ1 with distortion α, then there is an α-approximation algorithm for sparsest cut using SDPs.

the general idea min over cut-based x of ( Σuv in E xuv ) / ( Σuv in V×V xuv ) — NP-hard — equals min over x in ℓ1 of the same ratio [Avis Deza 89]. Relaxing instead to min over all ℓ2-squared metrics x can be computed by a semidefinite program. Theorem: If every n-point metric in ℓ2-squared embeds into ℓ1 with distortion α, then there is an α-approximation algorithm for sparsest cut using SDPs.

thank you!