Trees and Markov convexity. James R. Lee, Institute for Advanced Study [with Assaf Naor and Yuval Peres]

the Euclidean distortion problem

Given a metric space (X,d), determine how well X embeds into a Euclidean space.

Euclidean embedding: an injective map f : X → R^k (or L_2).

Distortion: the smallest number C ≥ 1 such that, after rescaling, d(x,y) ≤ ‖f(x) − f(y)‖ ≤ C·d(x,y) for all x, y ∈ X.

Why study this kind of geometry (in CS)?
- Applicability of low-distortion Euclidean embeddings
- Understanding semi-definite programs
- Optimization, harmonic analysis, hardness of approximation, cuts and flows, Markov chains, expansion, randomness…

the Euclidean distortion problem

Given a metric space (X,d), determine how well X embeds into a Euclidean space.

Euclidean embedding: an injective map f : X → R^k (or L_2).

Distortion: the smallest C ≥ 1 such that (after rescaling) d(x,y) ≤ ‖f(x) − f(y)‖ ≤ C·d(x,y). More generally, if (1/A)·d(x,y) ≤ ‖f(x) − f(y)‖ ≤ B·d(x,y) for all x, y, then the distortion is A·B.

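As a concrete illustration of the A·B formulation (not from the talk; the 4-cycle example and all names below are invented for illustration), the distortion of an embedding of a finite metric space can be computed directly as the ratio of the largest to the smallest expansion factor:

```python
import math
from itertools import combinations

def distortion(points, d, f):
    """Distortion of an embedding f of a finite metric space (points, d):
    (max ratio ||f(x)-f(y)|| / d(x,y)) divided by (min such ratio),
    which equals the best achievable A*B over all rescalings."""
    ratios = []
    for x, y in combinations(points, 2):
        ratios.append(math.dist(f[x], f[y]) / d(x, y))
    return max(ratios) / min(ratios)

# Example: the 4-cycle C_4 (graph metric) embedded as a unit square in R^2.
pts = [0, 1, 2, 3]
d4 = lambda x, y: min(abs(x - y), 4 - abs(x - y))   # cycle distance
square = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1)}
# distortion(pts, d4, square) == sqrt(2)
```

Here adjacent vertices keep their distance while antipodal pairs contract from 2 to √2, giving distortion √2, which is in fact the Euclidean distortion of C_4.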
the problem for trees

One of the simplest families of metric spaces is the family of tree metrics: a graph-theoretic tree T = (V,E) together with edge lengths len : E → R_+, where d(x,y) = the length of the shortest geodesic (path) between x and y.

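The tree metric on the slide is just weighted shortest-path distance; a minimal sketch (the tree, its vertex names, and its edge lengths are invented for illustration):

```python
import heapq

# Hypothetical weighted tree as an adjacency list: neighbor, edge length.
tree = {
    "x": [("u", 2.0)],
    "u": [("x", 2.0), ("v", 1.0)],
    "v": [("u", 1.0), ("y", 3.0)],
    "y": [("v", 3.0)],
}

def d(src, dst):
    """Shortest-path (geodesic) distance; in a tree the geodesic is unique,
    so plain Dijkstra recovers d(x,y) = sum of len(e) along the x-y path."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        cur, node = heapq.heappop(pq)
        if node == dst:
            return cur
        for nbr, w in tree[node]:
            if nbr not in dist or cur + w < dist[nbr]:
                dist[nbr] = cur + w
                heapq.heappush(pq, (cur + w, nbr))
    raise ValueError("disconnected")
```

For the tree above, d("x", "y") = 2 + 1 + 3 = 6.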
the problem for trees

When does a tree embed into some Euclidean space (arbitrary dimension) with bounded distortion?

[Bourgain 86]: The complete binary tree B_k of height k has Euclidean distortion Θ(√(log k)).

[Matousek 99]: Every n-point tree metric embeds with distortion at most O(√(log log n)).

[Gupta-Krauthgamer-L 03]: A tree metric T embeds with constant distortion into a finite-dimensional Euclidean space if and only if T is doubling.

why don’t trees embed in Hilbert space?

ONE ANSWER: (EQUILATERAL) FORKS

If both of these paths of length 2 (W–Z–A and W–Z–B) are embedded isometrically in a Euclidean space, then the two tips A and B must coincide!

A quantitative version holds: if both 2-paths are embedded with distortion 1 + ε, then ‖f(A) − f(B)‖ = O(√ε), even though d(A,B) = 2 in the tree.

uniform convexity

Parallelogram identity: for any pair of vectors a, b ∈ R^2 (indeed, in any Hilbert space), ‖a + b‖² + ‖a − b‖² = 2‖a‖² + 2‖b‖².

Setting f(W) = 0, a = f(A), b = f(B): if both 2-paths are embedded with distortion 1 + ε, then ‖a‖² and ‖b‖² are 4 ± O(ε), and the parallelogram identity forces ‖a − b‖ = O(√ε).

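The computation behind the slide's annotations ("4 ± O(ε)" and "O(√ε)") can be written out as follows; this is a sketch assuming unit edge lengths, distortion 1 + ε with f non-expansive, and writing a = f(A), b = f(B), z = f(Z), f(W) = 0:

```latex
\text{The hypotheses give } \|z\|^2,\ \|a-z\|^2,\ \|b-z\|^2 \le 1 + O(\varepsilon)
\ \text{ and } \ \|a\|^2,\ \|b\|^2 \ge 4 - O(\varepsilon).

\text{Parallelogram identity with } u = z,\ v = a - z:
\qquad \|a\|^2 + \|2z - a\|^2 \;=\; 2\|z\|^2 + 2\|a - z\|^2 \;\le\; 4 + O(\varepsilon),

\text{so } \|2z - a\|^2 = O(\varepsilon); \text{ similarly } \|2z - b\|^2 = O(\varepsilon).
\text{ Hence}
\qquad \|a - b\| \;\le\; \|a - 2z\| + \|2z - b\| \;=\; O(\sqrt{\varepsilon}).
```

Both tips are forced to within O(√ε) of the point 2z, which recovers the qualitative statement (ε = 0 ⇒ A and B coincide).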
forks in complete binary trees

Ramsey-style proof [Matousek]: if B_k is embedded into L_2 with distortion much smaller than √(log k), then there exists some almost-isometric fork. CONTRADICTION!

on forks

Natural question: are forks the only obstruction?

The problem isn’t forking; it’s forking, and forking, and forking…

THEOREM: For a tree metric T, the following conditions are equivalent:
-- T embeds in a Euclidean space with bounded distortion.
-- The complete binary trees {B_k} do not embed into T with uniformly bounded distortion.

In other words, a tree embeds into a Euclidean space if and only if it does not “contain” arbitrarily large complete binary trees!

quantitative version

DEFINITION: For a tree metric T, let b(T) be the height of the largest complete binary tree that embeds into T with distortion at most 2.

THEOREM: Letting c_2(T) be a tree’s Euclidean distortion, we have, up to constants, √(log b(T)) ≲ c_2(T) ≲ √(b(T)).

Let’s prove this…

monotone edge colorings

If T = (V,E) is a (rooted) tree, then an edge-coloring of T is a map χ : E → {1, 2, …, C}.

The coloring is monotone if every color class is a monotone path in T (monotone path = contiguous subset of a root-leaf path).

A coloring χ is α-good if, for every u, v ∈ T, at least an α-fraction of the u-v path is monochromatic.

good colorings ⇒ good embeddings

We associate to every color class j ∈ {1, 2, …, C} a unit vector χ_j ∈ R^C (the j-th standard basis vector).

Given a vertex x whose path from the root uses edges e_1, e_2, …, e_k, we define our embedding f : T → R^C by sending each color class to its total length along the path, in its own direction. For instance, if e_1, e_2 have color 1, e_3 has color 2, and e_4 has color 3, then

f(x) = [len(e_1) + len(e_2)]·χ_1 + len(e_3)·χ_2 + len(e_4)·χ_3.

good colorings ⇒ good embeddings

Claim: f is non-expansive, i.e. ‖f(x) − f(y)‖ ≤ d(x,y) for all x, y (triangle inequality).

good colorings ⇒ good embeddings

Claim: For every x, y ∈ T, ‖f(x) − f(y)‖ ≥ (α/2)·d(x,y). By monotonicity, the colors appearing below lca(x,y) on the x-side are disjoint from those on the y-side (monotonicity ⇒ disjoint colors), so a monochromatic α-fraction of the x-y path contributes a full coordinate of f(x) − f(y) with no cancellation.

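A minimal sketch of this coloring-to-embedding construction (the tiny rooted tree, its unit edge lengths, and the 2-color monotone coloring below are all invented for illustration):

```python
import math

# Hypothetical rooted tree via parent pointers, unit edge lengths,
# and a monotone edge coloring chi (edge = (parent, child)).
parent = {"a": "r", "b": "a", "c": "a"}            # r -> a -> {b, c}
color = {("r", "a"): 1, ("a", "b"): 1, ("a", "c"): 2}
C = 2                                              # number of color classes

def embed(v):
    """f(v): for each color class, the total length of that color on the
    root-to-v path, placed in its own coordinate of R^C."""
    coords = [0.0] * C
    while v in parent:
        coords[color[(parent[v], v)] - 1] += 1.0   # unit edge lengths
        v = parent[v]
    return coords

def tree_dist(u, v):
    """Tree distance = (steps from u to lca) + (steps from v to lca)."""
    def ancestors(x):
        path = [x]
        while x in parent:
            x = parent[x]
            path.append(x)
        return path
    au, av = ancestors(u), ancestors(v)
    common = set(au) & set(av)
    lca = next(x for x in au if x in common)
    return au.index(lca) + av.index(lca)

def euclid(p, q):
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

verts = ["r", "a", "b", "c"]
for i, u in enumerate(verts):
    for v in verts[i + 1:]:
        # non-expansive, as the claim on the slide asserts
        assert euclid(embed(u), embed(v)) <= tree_dist(u, v) + 1e-9
```

On this example the coloring is (1/2)-good, so the lemma below promises distortion at most 2/(1/2) = 4; the actual distortion of f here is √2.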
good colorings ⇒ good embeddings

LEMMA: If T admits an α-good coloring, then the Euclidean distortion of T is at most 2/α.

The hard part comes next…

THEOREM: If α* is the biggest α for which T admits an α-good coloring, then α* ≍ 1/b(T) up to universal constants.

good colorings ⇒ good embeddings

COROLLARY: c_2(T) ≤ 2/α* = O(b(T)) [a stronger embedding technique gives c_2(T) = O(√b(T))].

good colorings ⇒ good embeddings

Proof outline:
1. Give some procedure for coloring the edges of T.
2. If the procedure fails to construct an α-good coloring, find a complete binary tree of height O(1/α) embedded inside T.

constructing a good coloring

First, we define a family of trees {M_k}: these are just {B_k} with an extra “incoming” edge attached above the root (M_0, M_1, M_2, …).

Given a tree T, we say that T admits a copy of M_k at scale j if:
1. M_k embeds into T with distortion at most 2.
2. The root of M_k maps to the root of T.
3. The edges of M_k have length ≈ 4^j.

constructing a good coloring

Now suppose we have a “scale selector” function g : T → Z which assigns a “scale” to every vertex in T. We produce a coloring as follows.

How to continue a coloring through a vertex v with subtrees T_1, T_2, T_3, T_4: continue toward the T_i which admits the largest copy of M_k at scale j = g(v), i.e. with edge lengths ≈ 4^j (break ties arbitrarily).

constructing a good coloring

Suppose we failed to produce an α-good coloring: there are u, v with d(u,v) = D such that every monochromatic segment of the u-v path has length ≤ α·D [assume for simplicity that α·D ≈ 4^j].

Assume that g(w) = j for every breakpoint w on the u-v path. At each breakpoint the coloring turned off the u-v path, toward a subtree admitting a copy of M at scale j at least as large as the one continuing along the path. In this manner, we construct a complete binary tree of height ≈ 1/α inside T.

But what about our assumptions on g(w)?

constructing a good coloring

We can define g so that every sufficiently dense set of breakpoints contains a large subset with the “right” g-values, using hierarchical nets: the points with g(w) ≥ k form a 4^k-net, and at most a ¼ fraction of the 4^k-net points have label higher than k (a geometric sum). Now reconstruct a complete binary tree of height Ω(1/α) using just the correctly-labeled (green) nodes.

cantor trees

So we have these bounds (up to constants): √(log b(T)) ≲ c_2(T) ≲ √(b(T)).

This upper bound is tight: there exists a family of trees {C_k} for which c_2(C_k) ≍ √(b(C_k)) [so the “branching” lower bound only gives ≈ √(log b(C_k))].

cantor trees Spherically symmetric trees (SST): Every path with marked vertices yields a binary SST.

cantor trees

The Cantor trees are binary SSTs based on inductively defined paths: P_0 is a single point, and P_{k+1} = P_k, then an edge of length 2^{k+1}, then P_k (pictured: P_1, P_2, P_3).

len(P_k) = 2·len(P_{k−1}) + 2^k = k·2^k.

log log |C_k| ~ k, br(C_k) ~ k.

Claim: c_2(C_k) ~ √k.

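The length recurrence on this slide can be checked mechanically; taking len(P_0) = 0 (a convention assumed here, matching "P_0 is a single point") makes the closed form exact:

```python
# Recurrence from the slide: len(P_k) = 2 * len(P_{k-1}) + 2^k,
# with len(P_0) = 0, which solves to len(P_k) = k * 2^k.
def path_len(k):
    return 0 if k == 0 else 2 * path_len(k - 1) + 2 ** k

for k in range(12):
    assert path_len(k) == k * 2 ** k
```

So |C_k| is doubly exponential in k, consistent with log log |C_k| ~ k.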
strong edge colorings

A monotone edge coloring χ is β-strong if, for every u, v ∈ T, at least half of the u-v path is colored by classes of length at least β·d(u,v).

THEOREM: If β* is the biggest β for which T admits a β-strong coloring, then, up to constants, c_2(T) ≍ √(log(1/β*)).

Proof sketch:
1. Show that β-strong colorings yield good embeddings.
2. Give some procedure to construct a monotone coloring.
3. If the coloring fails to be β-strong, show that T must contain a Cantor-like subtree.
4. Show that every Cantor-like subtree requires large distortion to embed in a Euclidean space.

cantor trees

The Cantor trees do not have (good) strong colorings: in P_k, the single longest edge is a 1/k fraction of the path, each edge at the next scale is a 1/(2k) fraction, then 1/(4k), … so covering half of the path forces the use of classes of geometrically smaller length ⇒ the best coloring is only 2^{−k/2}-strong!

Markov convexity

Idea: look at Markov chains wandering in a Euclidean space; they must satisfy special properties, e.g. the symmetric random walk on Z, Z², … (pictured from t = 0 to t = k).

Markov convexity

For a Markov chain {X_t} and a time s, let X̃_t(s) denote the chain that agrees with {X_t} up to time s and then evolves independently with the same transition probabilities.

A metric space (M,d) is Markov 2-convex if there is a constant C ≥ 0 such that, for every Markov chain {X_t} taking values in M and every number m ∈ N,

Σ_{k ≥ 0} Σ_{t=1}^{m} E[ d(X_t, X̃_t(t − 2^k))² ] / 2^{2k} ≤ C² · Σ_{t=1}^{m} E[ d(X_t, X_{t−1})² ].

Markov convexity

THEOREM: Every Euclidean space is Markov 2-convex (with some universal constant C).

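A numerical sanity check (not a proof, and the constant 4 below is specific to this chain, not the universal C) of the Markov 2-convexity inequality for the simple ±1 random walk on Z: for this walk the expectations are computable exactly, since if X̃_t(t − 2^k) forks from X_t at time s = max(t − 2^k, 0), then X_t − X̃_t is a sum of 2·(t − s) independent ±1 steps, so E[(X_t − X̃_t)²] = 2·min(2^k, t).

```python
# Left-hand side of the Markov 2-convexity inequality for the +/-1 walk
# on Z, computed from the exact expectations E[(X_t - X~_t)^2] = 2*min(2^k, t).
def lhs(m, kmax=60):
    total = 0.0
    for k in range(kmax + 1):
        for t in range(1, m + 1):
            total += 2.0 * min(2 ** k, t) / 2 ** (2 * k)
    return total

def rhs(m):
    # E[(X_t - X_{t-1})^2] = 1 for every step of the walk
    return float(m)

for m in (1, 10, 100, 1000):
    # each t contributes at most sum_k 2 * 2^k / 4^k = 4 to the LHS,
    # so the ratio LHS/RHS stays bounded, as 2-convexity demands
    assert lhs(m) <= 4.0 * rhs(m)
```

The per-step bound Σ_k 2·2^k/4^k = 4 is why the assertion holds for every m; the point of the theorem is that such a uniform bound holds for every Markov chain in every Euclidean space.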
lower bounds from Markov convexity

If {X_t} is the downward random walk on B_k (with the leaves as absorbing states), run for 2^m steps where m ≈ log k:
-- two copies of the chain forked 2^j steps ago sit at tree-distance ≈ 2·2^j, so each term E[d(X_t, X̃_t(t − 2^j))²]/2^{2j} ≈ 1;
-- summing over t ≤ 2^m and j ≤ m, the left-hand side is ≈ m·2^m, while the right-hand side is ≈ 2^m.
The discrepancy with Euclidean space (which is Markov 2-convex) ⇒ distortion ≳ √m ≈ √(log k).

lower bounds from Markov convexity

Let {X_t} be the downward random walk on C_k (pictured: P_3, where P_{k+1} consists of two copies of P_k joined by an edge of length 2^{k+1}).

lower bounds from Markov convexity

Let {X_t} be the downward random walk on C_k.

Key fact: at least a j/k fraction of P_k is covered by segments whose length is at most 2^j.

conclusion

MAIN THEOREM: For every tree T, the Euclidean distortion of T and the Markov 2-convexity constant of T agree up to universal constant factors.

-- Markov convexity is a notion for general metric spaces (X,d). Can we relate non-trivial Markov convexity to the non-containment of arbitrarily large complete binary trees?
-- What about other Markov-style lower bounds for Hilbert space?
-- Can we use reversible Markov chains to construct NEG metrics?
-- Are these techniques useful for studying the bandwidth of trees?

QUESTIONS?