
1
Matroid Bases and Matrix Concentration
Nick Harvey (University of British Columbia)
Joint work with Neil Olver (Vrije Universiteit)

2
Scalar concentration inequalities
Theorem [Chernoff / Hoeffding bound]: Let Y_1, …, Y_m be independent, non-negative scalar random variables. Let Y = Σ_i Y_i and μ = E[Y]. Suppose Y_i ≤ 1 a.s. Then, for all δ > 0,
  Pr[ Y ≥ (1+δ)μ ] ≤ ( e^δ / (1+δ)^{1+δ} )^μ.
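The bound on this slide can be sanity-checked against simulation. A minimal sketch, assuming Bernoulli summands; the parameters, trial count, and the helper name `chernoff_upper_tail` are illustrative choices, not from the talk:

```python
import math
import random

def chernoff_upper_tail(mu, delta):
    # Chernoff bound: Pr[Y >= (1+delta)*mu] <= (e^delta / (1+delta)^(1+delta))^mu
    return (math.exp(delta) / (1 + delta) ** (1 + delta)) ** mu

random.seed(0)
m, p = 200, 0.1            # 200 independent Bernoulli(0.1) variables, so mu = 20
mu, delta = m * p, 0.5     # tail threshold (1+delta)*mu = 30
trials = 5000
hits = sum(
    sum(random.random() < p for _ in range(m)) >= (1 + delta) * mu
    for _ in range(trials)
)
empirical = hits / trials
bound = chernoff_upper_tail(mu, delta)
assert empirical <= bound  # the analytic bound dominates the empirical tail
```

Here the empirical tail frequency sits well below the analytic bound (≈0.115 for these parameters), as the theorem guarantees.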

3
Scalar concentration inequalities
Theorem [Panconesi-Srinivasan '92, Dubhashi-Ranjan '96, etc.]: Let Y_1, …, Y_m be negatively dependent, non-negative scalar random variables. Let Y = Σ_i Y_i and μ = E[Y]. Suppose Y_i ≤ 1 a.s. Then the same Chernoff-Hoeffding bound holds.
Negative cylinder dependence: Y_i ∈ {0,1} with Pr[ Y_i = 1 ∀ i ∈ S ] ≤ Π_{i∈S} Pr[ Y_i = 1 ] for all S.
Stronger notions: negative association, determinantal distributions, strongly Rayleigh measures, etc.

4
Matrix concentration inequalities
Theorem [Tropp '12, etc.]: Let Y_1, …, Y_m be independent, PSD random matrices of size n×n. Let Y = Σ_i Y_i and M = E[Y]. Suppose Y_i ⪯ I a.s. Then, for all δ > 0,
  Pr[ λ_max(Y) ≥ (1+δ)·λ_max(M) ] ≤ n · ( e^δ / (1+δ)^{1+δ} )^{λ_max(M)}.
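The matrix bound can likewise be checked empirically. The sketch below samples Y_i = b_i·A_i with independent Bernoulli b_i and fixed PSD A_i; the matrices, sampling probability p, and the normalization R on λ_max(A_i) are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p, R = 4, 100, 0.2, 0.5

# Fixed PSD matrices A_i, normalized so lambda_max(A_i) = R;
# we sample Y_i = b_i * A_i with b_i ~ Bernoulli(p), independently.
A = []
for _ in range(m):
    G = rng.normal(size=(n, n))
    M = G @ G.T
    A.append(R * M / np.linalg.eigvalsh(M)[-1])

mu_max = np.linalg.eigvalsh(p * sum(A))[-1]   # lambda_max of E[Y]
delta = 1.0
# Tropp-style matrix Chernoff tail bound at threshold (1+delta)*mu_max
bound = n * (np.exp(delta) / (1 + delta) ** (1 + delta)) ** (mu_max / R)

trials = 1000
exceed = 0
for _ in range(trials):
    b = rng.random(m) < p
    Y = sum(b[i] * A[i] for i in range(m))
    if np.linalg.eigvalsh(Y)[-1] >= (1 + delta) * mu_max:
        exceed += 1
assert exceed / trials <= bound
```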

5
Extensions of Chernoff Bounds

           | Independent        | Negatively dependent
Scalars    | Chernoff-Hoeffding | Panconesi-Srinivasan, etc.
Matrices   | Tropp, etc.        | ?

This talk: a special case of the missing common generalization, where the negatively dependent distribution is a certain random walk in a matroid base polytope.

6
Negative Dependence
Arises in many natural scenarios.
Random spanning trees: Let Y_e indicate whether edge e is in the tree T. Knowing that e ∈ T decreases the probability that f ∈ T.

7
Negative Dependence
Arises in many natural scenarios.
Random spanning trees: Let Y_e indicate whether edge e is in the tree.
Balls and bins: Let Y_i be the number of balls in bin i.
Also: sampling without replacement, random permutations, random cluster models, etc.
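The spanning-tree example can be verified exhaustively on the smallest interesting case, the triangle K_3; this tiny enumeration is illustrative, not from the slides:

```python
from itertools import combinations

# Uniform spanning trees of the triangle K3: any pair of edges is a tree.
edges = ["ab", "bc", "ca"]
trees = list(combinations(edges, 2))     # 3 trees, each with probability 1/3

def pr(event):
    return sum(1 for T in trees if event(T)) / len(trees)

p_e  = pr(lambda T: "ab" in T)                  # Pr[ab in T] = 2/3
p_f  = pr(lambda T: "bc" in T)                  # Pr[bc in T] = 2/3
p_ef = pr(lambda T: "ab" in T and "bc" in T)    # Pr[both]    = 1/3
cov = p_ef - p_e * p_f                          # 1/3 - 4/9 = -1/9 < 0
assert cov < 0   # the edge indicators are negatively correlated
```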

8
Thin trees
Cut: δ(S) = { edge st : s ∈ S, t ∉ S }.
A spanning tree T is α-thin if |δ_T(S)| ≤ α·|δ_G(S)| for all S.
Global connectivity: K = min { |δ_G(S)| : ∅ ⊊ S ⊊ V }.
Conjecture [Goddyn '80s]: Every n-vertex graph has an α-thin tree with α = O(1/K). Would have deep consequences in graph theory.
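Thinness of a concrete tree can be computed by brute force over all cuts. A sketch; the graph (K_4) and the path tree are illustrative choices:

```python
from itertools import combinations

def cut_size(edge_set, S):
    # number of edges with exactly one endpoint in S
    return sum(1 for u, v in edge_set if (u in S) != (v in S))

# G = K4 (global connectivity K = 3); T = the path 0-1-2-3.
V = range(4)
G = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (1, 3)]
T = [(0, 1), (1, 2), (2, 3)]

# Thinness alpha = max over nontrivial cuts of |delta_T(S)| / |delta_G(S)|
alpha = max(
    cut_size(T, set(S)) / cut_size(G, set(S))
    for r in range(1, 4) for S in combinations(V, r)
)
assert alpha == 0.75   # achieved by the cut S = {0, 2}
```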

9
Thin trees
A spanning tree T is α-thin if |δ_T(S)| ≤ α·|δ_G(S)| for all S.
Global connectivity: K = min { |δ_G(S)| : ∅ ⊊ S ⊊ V }.
Theorem [Asadpour et al '10]: Every n-vertex graph has an α-thin spanning tree with α = O(log n / log log n) · (1/K). Uses negative dependence and Chernoff bounds.

10
Asymmetric Traveling Salesman Problem [Julia Robinson, 1949]
Let D = (V, E, w) be a weighted, directed graph.
Goal: Find a tour, a sequence v_1, v_2, …, v_k = v_1 of vertices, that visits every vertex in V at least once, has v_i v_{i+1} ∈ E for every i, and minimizes the total weight Σ_{1≤i<k} w(v_i v_{i+1}).

11
Asymmetric Traveling Salesman Problem [Julia Robinson, 1949]
Let D = (V, E, w) be a weighted, directed graph.
Goal: Find a tour v_1, v_2, …, v_k = v_1 that visits every vertex in V at least once, has v_i v_{i+1} ∈ E for every i, and minimizes the total weight Σ_{1≤i<k} w(v_i v_{i+1}).
Reduction [Oveis Gharan, Saberi '11]: If you can efficiently find an (α/K)-thin spanning tree in any n-vertex graph, then you can find a tour whose weight is within O(α) of optimal.

12
Graph Laplacians
Laplacian of edge bc (on vertex set {a, b, c, d}): L_bc = (e_b − e_c)(e_b − e_c)^T, i.e.

        a  b  c  d
   a [  0  0  0  0 ]
   b [  0  1 -1  0 ]
   c [  0 -1  1  0 ]
   d [  0  0  0  0 ]

13
Graph Laplacians
Laplacian of graph G: L_G = Σ_{e ∈ E} L_e. Each diagonal entry is the degree of the node; each off-diagonal entry is −1 for every edge.
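A minimal sketch of the edge-Laplacian and graph-Laplacian construction; the 4-node edge list is an illustrative choice, not the graph drawn on the slide:

```python
import numpy as np

def edge_laplacian(n, u, v):
    # L_e = (e_u - e_v)(e_u - e_v)^T
    L = np.zeros((n, n))
    L[u, u] = L[v, v] = 1.0
    L[u, v] = L[v, u] = -1.0
    return L

# Illustrative 4-node graph on vertices {a, b, c, d} = {0, 1, 2, 3}
edges = [(0, 1), (1, 2), (2, 3), (0, 2)]
L_G = sum(edge_laplacian(4, u, v) for u, v in edges)

assert np.allclose(L_G @ np.ones(4), 0)        # every row sums to zero
assert np.diag(L_G).tolist() == [2, 2, 3, 1]   # diagonal = vertex degrees
```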

14
Spectrally-thin trees
A spanning tree T is α-spectrally-thin if L_T ⪯ α·L_G.
Effective resistance from s to t: R_st = the voltage difference when a 1-amp current source is placed between s and t.
Theorem [Harvey-Olver '14]: Every n-vertex graph has an α-spectrally-thin spanning tree with α = O(log n / log log n) · max_e R_e. Uses matrix concentration bounds. Algorithmic.
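Both quantities on this slide are directly computable with linear algebra. A sketch using K_4 and a path tree as illustrative inputs; the spectral thinness of T is the top eigenvalue of L_G^{+/2} L_T L_G^{+/2} on the space orthogonal to the all-ones vector:

```python
import numpy as np

def laplacian(n, edges):
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    return L

n = 4
G = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (1, 3)]   # K4
T = [(0, 1), (1, 2), (2, 3)]                            # path tree
L_G, L_T = laplacian(n, G), laplacian(n, T)

# Effective resistance R_st = (e_s - e_t)^T L_G^+ (e_s - e_t)
Lp = np.linalg.pinv(L_G)
def eff_res(s, t):
    x = np.zeros(n); x[s], x[t] = 1.0, -1.0
    return x @ Lp @ x

# Smallest alpha with L_T <= alpha * L_G: top eigenvalue of
# L_G^{+/2} L_T L_G^{+/2}, dropping the zero eigenvector (all-ones).
w, U = np.linalg.eigh(L_G)
half_pinv = U[:, 1:] @ np.diag(w[1:] ** -0.5) @ U[:, 1:].T
alpha = np.linalg.eigvalsh(half_pinv @ L_T @ half_pinv)[-1]
```

For K_4 every edge has effective resistance 1/2, and the path tree has spectral thinness (2+√2)/4 ≈ 0.854.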

15
Spectrally-thin trees
A spanning tree T is α-spectrally-thin if L_T ⪯ α·L_G.
Effective resistance from s to t: R_st = the voltage difference when a 1-amp current source is placed between s and t.
Theorem: Every n-vertex graph has an α-spectrally-thin spanning tree with α = O(max_e R_e). Follows from the Kadison-Singer solution of MSS '13. Not algorithmic.

16
Asymmetric Traveling Salesman Problem
Recent breakthrough [Anari, Oveis Gharan, Dec 2014]: Shows how to build on the O(1)-spectrally-thin tree result to approximate the optimal weight of an ATSP solution to within poly(log log n) of optimal. But there is no algorithm to find the actual sequence of vertices!

17
Our Main Result
Let P ⊂ [0,1]^m be a matroid base polytope (e.g., the convex hull of the characteristic vectors of spanning trees).
Let A_1, …, A_m be PSD matrices of size n×n.
Define Z(x) = Σ_i x_i A_i and Q = { x ∈ P : Z(x) ⪯ I }, and suppose Q ≠ ∅.
There is an extreme point χ(S) of P with λ_max( Σ_{i∈S} A_i ) ≤ α.

18
Our Main Result
Let P ⊂ [0,1]^m be a matroid base polytope. Let A_1, …, A_m be PSD matrices of size n×n. Define Z(x) = Σ_i x_i A_i and Q = { x ∈ P : Z(x) ⪯ I }, and suppose Q ≠ ∅. There is an extreme point χ(S) of P with λ_max( Σ_{i∈S} A_i ) ≤ α.
What is the dependence on α?
Easy: α ≥ 1.5, even with n = 2.
Standard random matrix theory: α = O(log n).
Our result: α = O(log n / log log n).
Ideally: α < 2. This would solve the Kadison-Singer problem.
MSS '13: Solved Kadison-Singer, achieving α = O(1).

19
Our Main Result
Let P ⊂ [0,1]^m be a matroid base polytope. Let A_1, …, A_m be PSD matrices of size n×n. Define Z(x) = Σ_i x_i A_i and Q = { x ∈ P : Z(x) ⪯ I }, and suppose Q ≠ ∅. There is an extreme point χ(S) of P with λ_max( Σ_{i∈S} A_i ) ≤ α.
Furthermore, there is a random process that starts at any x_0 ∈ Q and terminates after m steps at such a point χ(S), whp.
Each step of this process can be performed algorithmically.
The entire process can be derandomized.

20
Pipage rounding [Ageev-Sviridenko '04, Srinivasan '01, Calinescu et al. '07, Chekuri et al. '09]
Let P be any matroid polytope. Given a fractional x:
- Find coordinates a and b such that the line z ↦ x + z(e_a − e_b) stays in the current face.
- Find the two points where this line leaves P.
- Randomly choose one of those two points so that the expectation is x.
- Repeat until x = χ_T is integral.
x is a martingale: the expectation of the final χ_T is the original fractional x.
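The loop above can be sketched for the simplest matroid, the uniform matroid on m elements of rank k, whose base polytope is { x ∈ [0,1]^m : Σ x_i = k }; the function name and step details are illustrative:

```python
import random

def pipage_round(x, rng, eps=1e-9):
    # Pipage rounding in the base polytope of the uniform matroid
    # {x in [0,1]^m : sum(x) = k}: repeatedly pick two fractional
    # coordinates and shift mass so the iterate stays a martingale.
    x = list(x)
    while True:
        frac = [i for i, v in enumerate(x) if eps < v < 1 - eps]
        if not frac:
            return [round(v) for v in x]
        a, b = frac[0], frac[1]
        up = min(1 - x[a], x[b])     # largest step in direction +(e_a - e_b)
        down = min(x[a], 1 - x[b])   # largest step in direction -(e_a - e_b)
        if rng.random() < down / (up + down):   # chosen so E[new x] = x
            x[a] += up; x[b] -= up
        else:
            x[a] -= down; x[b] += down

S = pipage_round([0.5, 0.5, 0.5, 0.5], random.Random(0))  # k = 2, m = 4
assert sum(S) == 2 and all(v in (0, 1) for v in S)
```

Each step makes at least one coordinate integral, so the process ends within m steps, and the endpoint probabilities make the iterate a martingale.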

21
Pessimistic estimators
Definition: Let E ⊆ {0,1}^m be an event. Let D(x) be the product distribution on {0,1}^m with expectation x. Then g : [0,1]^m → R is a pessimistic estimator for E if Pr_{D(x)}[E] ≤ g(x) for all x.
Example: If E is the event { y : w^T y > t }, then Chernoff bounds give the pessimistic estimator g(x) = e^{−θt} · Π_i ( 1 − x_i + x_i e^{θ w_i} ).
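The Chernoff-style estimator can be checked against the exact probability by enumerating the product distribution on a toy instance; the weights, threshold, and θ below are arbitrary illustrative values:

```python
import math
from itertools import product

w, t, theta = [1.0, 1.0, 1.0], 2.5, 1.0
x = [0.5, 0.5, 0.5]

def g(x):
    # Chernoff pessimistic estimator for the event { y : w . y > t }
    val = math.exp(-theta * t)
    for xi, wi in zip(x, w):
        val *= 1 - xi + xi * math.exp(theta * wi)
    return val

# Exact probability of the event under the product distribution D(x)
p = sum(
    math.prod(xi if yi else 1 - xi for xi, yi in zip(x, y))
    for y in product([0, 1], repeat=len(x))
    if sum(wi * yi for wi, yi in zip(w, y)) > t
)
assert p <= g(x)   # g upper-bounds the true probability
```

Here p = 1/8 (all three coins must come up heads) while g(x) ≈ 0.53, so the estimator is a valid, if loose, upper bound.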

22
Concavity under swaps
Definition: A function f : R^m → R is concave under swaps if z ↦ f( x + z(e_a − e_b) ) is concave for all x ∈ P and all a, b ∈ [m].
Example: the Chernoff pessimistic estimator above is concave under swaps.
Pipage rounding: Let X_0 be the initial point and χ_T the final point visited by pipage rounding.
Claim: If f is concave under swaps, then E[ f(χ_T) ] ≤ f(X_0) [by Jensen].
Pessimistic estimators: Let E be an event and g a pessimistic estimator for E.
Claim: Suppose g is concave under swaps. Then Pr[ χ_T ∈ E ] ≤ g(X_0).

23
Matrix pessimistic estimators
Special case of Tropp '12: Let A_1, …, A_m be n×n PSD matrices. Let D(x) be the product distribution on {0,1}^m with expectation x. Let
  g_{t,θ}(x) = e^{−θt} · tr exp( Σ_i log( I + x_i (e^{θ A_i} − I) ) ).
Then g_{t,θ} is a pessimistic estimator for the event { y : λ_max( Σ_i y_i A_i ) ≥ t }.
Main technical result: g_{t,θ} is concave under swaps.
⇒ Tropp's bound for independent sampling is also achieved by pipage rounding.
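A sketch of a Tropp-style matrix estimator, computed via eigendecompositions of symmetric matrices and checked against exact enumeration on a toy instance; the matrix sizes, t, and θ are illustrative, and the form uses E[e^{θ y_i A_i}] = I + x_i(e^{θ A_i} − I) for Bernoulli y_i:

```python
import numpy as np
from itertools import product

def sym_fun(M, f):
    # apply a scalar function f to a symmetric matrix via eigendecomposition
    w, U = np.linalg.eigh(M)
    return U @ np.diag(f(w)) @ U.T

def g(x, A, t, theta):
    # g_{t,theta}(x) = e^{-theta t} tr exp( sum_i log(I + x_i (e^{theta A_i} - I)) )
    n = A[0].shape[0]
    I = np.eye(n)
    S = sum(sym_fun(I + xi * (sym_fun(theta * Ai, np.exp) - I), np.log)
            for xi, Ai in zip(x, A))
    return np.exp(-theta * t) * np.trace(sym_fun(S, np.exp))

rng = np.random.default_rng(1)
m, n = 5, 2
A = []
for _ in range(m):
    G = rng.normal(size=(n, n)); M = G @ G.T
    A.append(M / np.linalg.eigvalsh(M)[-1])    # lambda_max(A_i) = 1
x, t, theta = [0.5] * m, 4.0, 1.0

# Exact tail probability by enumerating all 2^m outcomes
p = sum(
    2.0 ** -m
    for b in product([0, 1], repeat=m)
    if np.linalg.eigvalsh(sum((bi * Ai for bi, Ai in zip(b, A)),
                              np.zeros((n, n))))[-1] >= t
)
assert p <= g(x, A, t, theta)   # the estimator upper-bounds the tail
```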

24
Our Variant of Lieb's Theorem: a trace concavity statement over positive definite (PD) matrices.

25
Questions
- Does Tropp's matrix concentration bound hold in a negatively dependent scenario?
- Does our variant of Lieb's theorem have other uses?
- O(max_e R_e)-spectrally-thin trees exist by MSS '13. Can they be constructed algorithmically?
