Generic Rounding Schemes for SDP Relaxations



Presentation on theme: "Generic Rounding Schemes for SDP Relaxations" — Presentation transcript:

1 Generic Rounding Schemes for SDP Relaxations
Prasad Raghavendra Georgia Institute of Technology, Atlanta

2 "Squish and Solve" Rounding Schemes [R, Steurer 2009]
Rounding Schemes via Dictatorship Tests [R, 2008]
Rounding SDP Hierarchies via Correlation [Barak, R, Steurer 2011] [R, Tan 2011]

3 "Squish and Solve" Rounding Schemes [R, Steurer 2009]

4 Max Cut
Input: A weighted graph G.
Find: A cut with maximum number/weight of crossing edges.
Objective: the fraction (weight) of crossing edges.

5 Max Cut Problem
Max Cut Problem: Given a graph G, find a cut that maximizes the number of crossing edges.
Semidefinite Program [Goemans-Williamson 94]: embed the graph on the N-dimensional unit ball, maximizing ¼ × (average squared length of the edges).
Variables: v1, v2, ..., vn with |vi|² = 1.
Maximize: ¼ Σ over edges (i,j) of |vi − vj|².

6 MaxCut Rounding
Cut the sphere by a random hyperplane, and output the induced graph cut.
This gives a ≈0.878-approximation for the problem. [Goemans-Williamson]
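A minimal NumPy sketch of this hyperplane-rounding step, assuming the SDP vectors are already available as rows of an array V and W is the symmetric weight matrix (both names are mine; the SDP solve itself is not shown here):

```python
import numpy as np

def hyperplane_round(V, W, trials=100, seed=0):
    """Round MaxCut SDP vectors V (one unit vector per row) against weight matrix W.

    Cuts the sphere by a random hyperplane: vertex i goes to side sign(V[i] . g)
    for a random Gaussian direction g. Returns the best cut over several trials.
    """
    rng = np.random.default_rng(seed)
    n, d = V.shape
    best_value, best_cut = -np.inf, None
    for _ in range(trials):
        g = rng.standard_normal(d)                       # random hyperplane normal
        x = np.sign(V @ g)                               # +1 / -1 side for each vertex
        x[x == 0] = 1
        # Weight of crossing edges: (1 - x_i x_j) = 2 on crossing edges,
        # and the symmetric W counts each edge twice, hence the factor 1/4.
        value = 0.25 * np.sum(W * (1 - np.outer(x, x)))
        if value > best_value:
            best_value, best_cut = value, x
    return best_value, best_cut
```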

7 "Squish and Solve" Rounding

8 Approximation using Finite Models
Λ-CSP instance ℑ → (variable folding: identifying variables) → Λ-CSP instance ℑfinite.
Solve ℑfinite optimally in constant time; unfolding of the assignment gives an approximate solution for ℑ.
Challenge: ensure that ℑfinite has a good solution.

9 Approximation using Finite Models
PTAS for dense instances — a general method for CSPs [Frieze-Kannan]: for a dense instance ℑ, it is possible to construct a finite model ℑfinite with OPT(ℑfinite) ≥ (1−ε) OPT(ℑ).
What we will do: SDP value(ℑfinite) ≥ (1−ε) × SDP value(ℑ).

10 Analysis of Rounding Scheme
Λ-CSP instance ℑ, with SDP value α → Λ-CSP instance ℑfinite, with SDP value > α − ε.
Unfolding carries the rounded value (and the OPT value) from ℑfinite back to ℑ.
Hence: rounding-ratio for ℑ ≤ (1+ε) × integrality-ratio for ℑfinite.

11 Constructing Finite Models (MaxCut)

12 Step 1: Dimension Reduction
Pick d = 1/ε⁴ random Gaussian vectors {G1, G2, ..., Gd} and project the SDP solution along these directions: map each vector V → V' = (V·G1, V·G2, ..., V·Gd).
Step 2: Surgery — scale every vector V' to unit length.
Step 3: Discretization — pick an ε-net for the d-dimensional sphere and move every vertex to the nearest point in the ε-net.
Finite model: the graph on the ε-net points, in constantly many dimensions.
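A minimal NumPy sketch of the three steps, assuming the SDP solution is given as rows of a matrix; the coordinate-grid snap in Step 3 is only an illustrative stand-in for a true ε-net of the sphere, not the construction from the talk:

```python
import numpy as np

def squish(V, eps, seed=0):
    """Steps 1-3 applied to SDP vectors V (one unit vector per row)."""
    rng = np.random.default_rng(seed)
    n, N = V.shape
    d = int(np.ceil(1.0 / eps ** 4))

    # Step 1: Dimension reduction -- project along d random Gaussian directions.
    G = rng.standard_normal((N, d)) / np.sqrt(d)   # scaling keeps lengths roughly 1
    Vp = V @ G

    # Step 2: Surgery -- rescale every projected vector to unit length.
    Vp /= np.linalg.norm(Vp, axis=1, keepdims=True)

    # Step 3: Discretization -- snap each vector to a nearby grid point and
    # renormalize (a crude stand-in for an eps-net of the d-dimensional sphere).
    Vnet = np.round(Vp / eps) * eps
    Vnet /= np.linalg.norm(Vnet, axis=1, keepdims=True)

    # The finite model identifies all vertices that landed on the same net point.
    folding = {}
    for i, key in enumerate(map(tuple, np.round(Vnet, 6))):
        folding.setdefault(key, []).append(i)
    return Vnet, folding
```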

13 To Show: SDP value(ℑfinite) ≥ (1−ε) × SDP value(ℑ)
Johnson-Lindenstrauss Lemma: "Distances are almost preserved under random projections."
If U', V' are the random projections of unit vectors U, V onto 1/ε⁴ directions, then Pr[ |V·U − V'·U'| > ε ] < ε².
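A quick Monte Carlo illustration of the lemma as stated (an experiment on one pair of vectors, not a proof; the ambient dimension N = 200 and ε = 0.25 are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
N, eps, trials = 200, 0.25, 200
d = int(1 / eps ** 4)                    # number of random projection directions

# Two fixed unit vectors U, V in R^N.
U = rng.standard_normal(N); U /= np.linalg.norm(U)
V = rng.standard_normal(N); V /= np.linalg.norm(V)

bad = 0
for _ in range(trials):
    G = rng.standard_normal((N, d)) / np.sqrt(d)   # random projection matrix
    Up, Vp = U @ G, V @ G
    if abs(U @ V - Up @ Vp) > eps:
        bad += 1

print("empirical Pr[|V.U - V'.U'| > eps] =", bad / trials, "   eps^2 =", eps ** 2)
```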

14 To Show: SDP value(ℑfinite) ≥ (1−ε) × SDP value(ℑ)
For SDP value(ℑ): the contribution of an edge e = (U,V) is |U−V|² = 2 − 2 U·V. The SDP vectors for ℑfinite are the corresponding vectors in the ε-net.
Step 1 (Dimension reduction — project the SDP solution along 1/ε⁴ random directions): with probability > 1 − ε², | |U−V|² − |U'−V'|² | < 2ε.
Step 2 (Surgery — scale every vector V' to unit length): with probability > 1 − 2ε², 1 − ε < |V'|², |U'|² < 1 + ε, so normalization changes the distance by at most 2ε.
Step 3 (Discretization — move every vector to the nearest ε-net point): changes the edge length by at most 2ε.

15 To Show: SDP value(ℑfinite) ≥ (1−ε) × SDP value(ℑ)
For SDP value(ℑ): the contribution of an edge e = (U,V) is |U−V|² = 2 − 2 U·V; the SDP vectors for ℑfinite are the corresponding vectors in the ε-net.
Analysis (combining Steps 1-3 from the previous slide): with probability 1 − 3ε², the contribution of edge e changes by < 6ε. In expectation, for a (1 − 3ε²) fraction of the edges the contribution changes by < 6ε, so
SDP value(ℑfinite) > SDP value(ℑ) − 6ε − 3ε².

16 Generic Rounding For CSPs
[Raghavendra-Steurer 08] For any CSP Λ and any ε > 0, there exists an efficient algorithm A with
rounding-ratio_A(Λ) (its approximation ratio) ≥ (1 − ε) × integrality gap of a natural SDP relaxation of Λ (this SDP is optimal under the UGC).
The scheme unifies a large number of existing rounding schemes, and the resulting algorithm A is as good as all known algorithms for CSPs (without dependence on n).
Drawbacks: the running time of A (for a CSP over alphabet size q and arity k), and no explicit approximation ratio.

17 Computing Integrality Gaps
Theorem: For any CSP Λ and any ε > 0, there is an algorithm A that computes the integrality gap(Λ) within accuracy ε.
Idea: run through all instances of size exp(poly(k, q, 1/ε)); this dominates the running time of A on CSPs over alphabet size q and arity k.

18 Rounding Schemes via Dictatorship Tests

19 Dictatorship Test
Given a function F: {-1,1}^R → {-1,1}: toss random coins, make a few queries to F, and output either ACCEPT or REJECT.
If F is a dictator function, F(x1, ..., xR) = xi: Pr[ACCEPT] = Completeness.
If F is far from every dictator function (no influential coordinate): Pr[ACCEPT] = Soundness.

20 UG Hardness
Rule of Thumb [Khot-Kindler-Mossel-O'Donnell]: if there is a dictatorship test where Completeness = c, Soundness = αc, and the verifier's tests are predicates from a CSP Λ, then it is UG-hard to approximate the CSP Λ to a factor better than α.

21 A Dictatorship Test for MaxCut
A dictatorship test is a graph G on the hypercube {-1,1}^100; a cut gives a function F on the hypercube.
Completeness: the value of the dictator cuts F(x) = xi.
Soundness: the maximum value attained by a cut far from every dictator.
This gives a "finite model": finding the best cut reduces to finding a cut far from every dictator in a constant-sized hypercube.

22 Overview
From graph G and its SDP solution, construct a dictatorship test gadget on the 100-dimensional hypercube.
Completeness: value of the dictator cuts = SDP value(G).
Soundness: a cut far from every dictator gives a cut of graph G with the same value.
Rounding Scheme: construct the dictatorship test gadget from graph G, try all possible cuts far from a dictator, and obtain a cut back in the graph G.
Guarantee: the algorithm's output value ≥ soundness of the dictatorship test gadget.

23 Can't Get a Better Approximation Assuming the UGC!
UG Hardness [KKMO]: a dictatorship test with Completeness C and Soundness S yields: "On instances with value C, it is NP-hard to output a solution of value S, assuming the UGC."
In our case, Completeness = SDP value(G) and Soundness ≤ Algorithm's output, so one can't get a better approximation assuming the UGC. (More work is needed to show that the algorithm is at least as good as all known algorithms.)

24 The Goal
Graph G with its SDP solution on one side; a gadget on the 100-dimensional hypercube on the other.
Completeness: value of the dictator cuts = SDP value(G).
Soundness: a cut far from every dictator gives a cut of graph G with the same value.
This yields a "finite model": finding the best cut reduces to finding a cut far from every dictator in a constant-sized hypercube.

25 Influences
Definition: The influence of the i-th coordinate on a function F: {0,1}^R → [-1,1] under a product distribution μ^R is
Inf_i^μ(F) = E[ Var[F] ], where the variance is over re-sampling the i-th coordinate from μ, and the expectation is over a random fixing of all other coordinates from μ^(R−1).
(For the i-th dictator function, Inf_i^μ(F) is as large as the variance of F.)
Definition: A function F is τ-quasirandom if Inf_i^μ(F) ≤ τ for all i.
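A brute-force sketch of this definition under the uniform distribution on {-1,1}^R (my illustration; exponential in R, so only for the constant-sized R that arises in these finite models — the example functions are illustrative):

```python
import itertools
import numpy as np

def influences(F, R):
    """Inf_i(F) = E over the other coordinates of [ Var over x_i of F(x) ], uniform measure."""
    infs = np.zeros(R)
    for i in range(R):
        for rest in itertools.product([-1, 1], repeat=R - 1):
            # Vary only the i-th coordinate, holding the others fixed at `rest`.
            vals = [F(rest[:i] + (xi,) + rest[i:]) for xi in (-1, 1)]
            infs[i] += np.var(vals)          # variance over the i-th coordinate
        infs[i] /= 2 ** (R - 1)              # average over fixings of the others
    return infs

R = 8
dictator = lambda x: x[3]                     # Inf_3 = Var(F) = 1, all others 0
majority = lambda x: 1 if sum(x) > 0 else -1  # all influences small and roughly equal
print(influences(dictator, R))
print(influences(majority, R))
```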

26 Dimension Reduction
Max Cut SDP: embed the graph on the N-dimensional unit ball, maximizing ¼ × (average squared length of the edges).
Project to a random 1/ε²-dimensional space (a constant number of dimensions).
New SDP value = old SDP value ± ε.

27 Making the Instance Harder
SDP value = ¼ × (average squared length of an edge).
Transformations: a rotation does not change the SDP value, and the union of two rotations has the same SDP value.
Sphere graph H: the union of all possible rotations of G. Hence SDP value(graph G) = SDP value(sphere graph H), and a good cut for the sphere graph implies a good cut for the original graph.

28 Making the Instance Harder
MaxCut(H) = S ⟹ MaxCut(G) ≥ S: pick a random rotation of G and read off the cut induced on it.
Thus MaxCut(H) ≤ MaxCut(G), while SDP value(G) = SDP value(H).

29 Hypercube Graph
For each edge e of G, connect every pair of vertices of the 100-dimensional hypercube {-1,1}^100 separated by the (squared) length of e.
To generate edges of expected squared length d: 1) start with a random x ∈ {-1,1}^100; 2) generate y by flipping each bit of x with probability d/4; output (x, y).
For the hypercube graph: SDP value = SDP value(G), and the integral (MaxCut) value equals the SDP value.
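A sketch of this edge-generation procedure, together with a check that a single dictator cut crosses such an edge with probability d/4 (the graph-edge length d = 1.5 and the sample size are arbitrary; the 1/√100 scaling remark in the comment is my reading of the normalization):

```python
import numpy as np

rng = np.random.default_rng(0)
n_dims = 100                                   # the hypercube {-1,1}^100 from the slide

def sample_hypercube_edge(d):
    """One hypercube edge for a graph edge of squared length d: flip each bit w.p. d/4."""
    x = rng.choice([-1, 1], size=n_dims)
    flip = rng.random(n_dims) < d / 4
    y = np.where(flip, -x, x)
    return x, y

d = 1.5                                        # squared length |u - v|^2 of some graph edge
edges = [sample_hypercube_edge(d) for _ in range(20000)]

# A dictator cut F(x) = x_i cuts edge (x, y) exactly when bit i differs, and the
# expected fraction of differing bits is d/4 (with coordinates scaled by 1/sqrt(100),
# the expected squared distance between x and y is d, matching the graph edge).
cut_frac = np.mean([x[0] != y[0] for x, y in edges])
print(cut_frac, "should be close to", d / 4)
```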

30 Dichotomy of Cuts
A cut gives a function F on the hypercube, F: {-1,1}^100 → {-1,1}.
Dictator cuts: F(x) = xi.
Cuts far from dictators: the influence of each coordinate on F is small.
This gives a "finite model": finding the best cut reduces to finding a cut far from every dictator in a constant-sized hypercube.

31 Dictator Cuts
For each edge e = (u,v) of G, connect every pair of hypercube vertices separated by the length of e; consider all hypercube edges (X, Y) corresponding to e.
The fraction of coordinates in which X and Y differ is ≈ |u−v|²/4, so the fraction of dictators that cut one such edge (X, Y) is ≈ |u−v|²/4.
Hence the fraction of edges cut by a dictator = ¼ × (average squared distance), i.e. the value of the dictator cuts = SDP value(G).

32 Cuts far from Dictators
Intuition: the sphere graph is uniform over all directions, whereas in the hypercube graph the axes are special directions.
If a cut does not respect the axes, then it should not distinguish between the sphere graph and the hypercube graph.

33 The Invariance Principle
Central Limit Theorem: "A sum of a large number of {-1,1} random variables has a distribution similar to a sum of a large number of Gaussian random variables."
Invariance Principle for low-degree polynomials [Rotar], [Mossel-O'Donnell-Oleszkiewicz], [Mossel 2008]: "If a low-degree polynomial F has no influential coordinate, then F({-1,1}^n) and F(Gaussian) have similar distributions."
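A tiny numerical illustration of the CLT statement above, comparing a normalized sum of ±1 variables with a sum of Gaussians (the degree-1 case of the invariance principle; the sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 200, 50000

signs = rng.choice([-1.0, 1.0], size=(trials, n)).sum(axis=1) / np.sqrt(n)
gauss = rng.standard_normal((trials, n)).sum(axis=1) / np.sqrt(n)

# The two normalized sums should have nearly identical quantiles.
qs = [0.1, 0.25, 0.5, 0.75, 0.9]
print(np.quantile(signs, qs))
print(np.quantile(gauss, qs))
```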

34 Hypercube vs Sphere
Let F: {-1,1}^100 → {-1,1} be a cut far from every dictator, and let P: sphere → nearly {-1,1} be the multilinear extension of F.
By the Invariance Principle, the MaxCut value of F on the hypercube ≈ the MaxCut value of P on the sphere graph H.

35 Rounding SDP Hierarchies via Correlation
[Barak,R,Steurer 2011] [R,Tan 2011]

36 The Unique Games Barrier
It is Unique Games-hard to approximate the following problems to a factor better than that given by the simple SDP relaxation:
Constraint Satisfaction Problems [R 08]
Metric Labelling Problems [Manokaran-Naor-R.-Schwartz 08]
Ordering Constraint Satisfaction Problems [Guruswami-Hastad-Manokaran-R.]
Kernel Clustering Problems [Khot-Naor 09]
Grothendieck Problem [R.-Steurer 09]
Monotone-Hard-Constraint CSPs [Kumar-Manokaran-Tulsiani-Vishnoi]

37 For the non-believers [R-Steurer 09]
Unconditionally, adding all valid constraints on at most 2^O((log log n)^(1/4)) variables to the simple SDP does not improve the approximation ratio for:
Constraint Satisfaction Problems
Metric Labelling Problems
Ordering Constraint Satisfaction Problems
Kernel Clustering Problems
Grothendieck Problem

38 Stronger SDP Relaxations
Possibility: "Certain strong SDP relaxations (say, five rounds of the Lasserre hierarchy) yield better approximations and disprove the Unique Games Conjecture."
Even otherwise: for what problems do these relaxations help? How does one use these stronger SDP relaxations?

39 Difficulty
Successes of stronger SDP relaxations: [Arora-Rao-Vazirani] used an SDP with triangle inequalities to improve the approximation for Sparsest Cut from log n to sqrt(log n); stronger SDPs gave better approximations for graph and hypergraph independent set in [Chlamtac], [Arora-Charikar-Chlamtac], [Chlamtac-Singh].
Yet there are very few general techniques to extract the power of stronger SDP relaxations.

40 SDP for MaxCut
Quadratic Program: variables x1, x2, ..., xn with xi = 1 or −1; maximize ¼ Σ over edges (i,j) of wij (xi − xj)².
Semidefinite Program: relax each xi to a unit vector vi (|vi|² = 1) and replace all products by inner products of vectors; maximize ¼ Σ over edges (i,j) of wij |vi − vj|².
The MaxCut SDP relaxation can be solved in cubic time (in linear time [Kale]).
Ideally, these vectors would be a convex combination of integral solutions — the SDP could then be thought of as a distribution over cuts. Instead, we force the vectors to look like integral solutions locally (on every k vertices).
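A minimal sketch of this relaxation written over the Gram matrix X with X_ij = vi·vj, using the cvxpy modeling library (an assumed tool, not something from the talk; any SDP solver would do). The recovered rows of V can then be fed to the hyperplane rounding sketched earlier.

```python
import numpy as np
import cvxpy as cp   # assumes the cvxpy modeling library is available

def maxcut_sdp(W):
    """Goemans-Williamson relaxation over the Gram matrix X, with X_ij = v_i . v_j."""
    n = W.shape[0]
    X = cp.Variable((n, n), PSD=True)
    # |v_i - v_j|^2 = 2 - 2 X_ij for unit vectors; the symmetric matrix W counts each
    # edge twice, hence the factor 1/8 instead of 1/4 in front of the double sum.
    objective = cp.Maximize(0.125 * cp.sum(cp.multiply(W, 2 - 2 * X)))
    problem = cp.Problem(objective, [cp.diag(X) == 1])
    problem.solve()
    # Recover vectors v_i (rows of V with V V^T ~= X) via an eigendecomposition.
    w, U = np.linalg.eigh(X.value)
    V = U @ np.diag(np.sqrt(np.clip(w, 0, None)))
    return problem.value, V
```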

41 k-round Lasserre-SDP for MaxCut
Local distributions μS: for any subset S of k vertices, a local distribution μS over {+1,−1} assignments to the set S.
Conditioned SDP vectors {vi | S ← α}: for any subset S of k vertices and any assignment α ∈ {−1,1}^k, an SDP solution {vi | S ← α} corresponding to the original SDP solution conditioned on S being assigned α.

42 Correlations
Correlation: "Two random variables are correlated if fixing the value of one changes the distribution of the other."
Measuring correlation: the mutual information between the two random variables,
I(X, Y) = H(X) − H(X | Y) (entropy of X minus the conditional entropy of X given Y).
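A small sketch of these quantities computed from a joint probability table (the 2×2 table is a made-up example):

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(joint):
    """I(X;Y) = H(X) - H(X|Y), with the joint distribution given as a 2-D table
    (rows indexed by values of X, columns by values of Y)."""
    px = joint.sum(axis=1)
    py = joint.sum(axis=0)
    h_x_given_y = sum(py[j] * entropy(joint[:, j] / py[j])
                      for j in range(joint.shape[1]) if py[j] > 0)
    return entropy(px) - h_x_given_y

# Example: two correlated {+1,-1} variables -- fixing Y changes the distribution of X.
joint = np.array([[0.4, 0.1],
                  [0.1, 0.4]])
print(mutual_information(joint))   # positive, about 0.278 bits
```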

43 Global Correlation
Global correlation is the average correlation between random pairs of vertices in the instance: GC = E_{a,b}[ I(Xa, Xb) ].
Crucial observation: conditioning the SDP solution on the value of a random vertex Xa reduces the average entropy by GC.
Proof: the average entropy is E_b[ H(Xb) ], and the average entropy after conditioning on Xa is E_a[ E_b[ H(Xb | Xa) ] ]. Hence the decrease is E_b[ H(Xb) ] − E_a[ E_b[ H(Xb | Xa) ] ] = E_{a,b}[ H(Xb) − H(Xb | Xa) ] = E_{a,b}[ I(Xb, Xa) ] = GC.
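A numerical check of the identity used in this proof (the decrease in average entropy equals the average mutual information), on a small arbitrary distribution over four ±1 variables; it reuses the entropy and mutual_information helpers from the previous sketch, and conditioning on a single vertex is modeled through pairwise joint tables:

```python
import itertools
import numpy as np

# Reuses entropy() and mutual_information() from the previous sketch.
rng = np.random.default_rng(0)
n = 4
p = rng.random(2 ** n); p /= p.sum()                     # arbitrary joint over {-1,+1}^n
assignments = list(itertools.product([0, 1], repeat=n))  # index 0 <-> -1, 1 <-> +1

def pair_joint(a, b):
    """2x2 joint table of (X_a, X_b) under the full distribution p."""
    table = np.zeros((2, 2))
    for prob, row in zip(p, assignments):
        table[row[a], row[b]] += prob
    return table

# Average entropy before conditioning, and after conditioning on a random vertex X_a.
avg_entropy_before = np.mean([entropy(pair_joint(0, b).sum(axis=0)) for b in range(n)])
avg_entropy_after = np.mean([entropy(pair_joint(a, b).flatten()) - entropy(pair_joint(a, b).sum(axis=1))
                             for a in range(n) for b in range(n)])   # E_a E_b H(X_b | X_a)
global_corr = np.mean([mutual_information(pair_joint(a, b)) for a in range(n) for b in range(n)])

print(avg_entropy_before - avg_entropy_after, "should equal", global_corr)
```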

44 Progress By Global Correlations
Suppose an SDP solution has global correlation > ε. Then sampling and conditioning on the value of a random vertex drops the average entropy by ε.
If the global correlation always remains > ε, then after 1/ε conditionings the average entropy ≈ 0, so the variables are almost frozen and the conditioned SDP solution is nearly integral.
Corollary: within O(1/ε) conditionings, the global correlation of the SDP solution becomes < ε.

45 Application: Max Bisection
Input: A weighted graph G.
Find: A cut with maximum number/weight of crossing edges, with exactly ½ of the vertices on each side of the cut.

46 Halfspace Rounding?
Cut the sphere by a random hyperplane, and output the induced graph cut.
The expected fraction of vertices on each side of the cut is half; however, the actual number of vertices can always be far from half — no concentration.
Independence among random variables ⟹ concentration (e.g. Chernoff bounds); lack of concentration ⟹ lack of independence.

47 Bounding Variance
Let Z1, Z2, ..., Zn denote the random projections, and suppose the rounding function is F: R → [0,1].
Fraction of vertices on one side of the cut = E_a[ F(Za) ].
Variance of this random variable = E_Z[ E_{a,b}[ F(Za) F(Zb) ] − E_a[ F(Za) ] E_b[ F(Zb) ] ] = E_{a,b}[ Cov(F(Za), F(Zb)) ].
Low global correlation ⟹ E_{a,b}[ I(Za, Zb) ] is small ⟹ the above variance is small.
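A small numerical check of the variance identity above, with toy "projections" Z_a = ⟨v_a, g⟩ obtained from made-up unit vectors and a threshold rounding function (all choices here are illustrative, not the algorithm's):

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 50, 20000

# Toy unit vectors in a low dimension, so their projections are correlated.
V = rng.standard_normal((n, 5))
V /= np.linalg.norm(V, axis=1, keepdims=True)

G = rng.standard_normal((trials, 5))          # one random direction g per trial
Z = G @ V.T                                   # Z[t, a] = <v_a, g_t>

F = lambda z: (z > 0).astype(float)           # rounding function F: R -> [0, 1]
side_fraction = F(Z).mean(axis=1)             # E_a[ F(Z_a) ], one value per trial

var_of_mean = side_fraction.var()
avg_cov = np.cov(F(Z), rowvar=False).mean()   # E_{a,b}[ Cov(F(Z_a), F(Z_b)) ]
print(var_of_mean, "≈", avg_cov)
```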

48 CSPs with Global Cardinality Constraint
[R, Tan 2011] Given an instance of Max Bisection / Min Bisection with value 1 − ε, there is an algorithm running in time n^poly(1/ε) that finds a solution of value 1 − O(ε^(1/2)).
[R, Tan 2011] For every CSP with a global cardinality constraint, there is a corresponding dictatorship test whose Soundness/Completeness equals the integrality gap of the poly(1/ε)-round Lasserre SDP.

49 Another Application: 2-CSPs on "Expanding Instances"
Locally, the constraints of the CSP introduce correlations among the variables. If the constraint graph is a sufficiently good expander, these local correlations must translate into global correlations.

50 Low-Rank Graphs
Suppose the adjacency matrix of the graph is "low rank" — well approximated by a few eigenvectors.
Lemma: If the number of eigenvalues > δ is less than d, then an SDP solution with local correlation > δ has global correlation O(1/d²).

51 2-CSP on random constraint graphs
[Barak-Raghavendra-Steurer] Given an instance of a 2-CSP whose constraint graph is a degree-d random graph, the poly(1/ε, k, d)-round Lasserre SDP hierarchy has value < Optimum + o_d(1).

52 Another Application
Subexponential-time algorithm for Unique Games [Arora-Barak-Steurer]: given an instance of Unique Games with value 1 − ε, in time exp(n^ε) the algorithm finds a solution of value 1 − ε^c. It uses a combination of brute force and spectral decomposition, but no SDPs.
Subexponential-time algorithm for Unique Games via SDPs [Barak-Raghavendra-Steurer] [Guruswami-Sinop]: given an instance of Unique Games such that n^O(ε) rounds of the SDP hierarchy have value 1 − ε, there exists an assignment of value 1 − ε^c.

53 Future Work
Can one use local-global correlations to prove [Arora-Rao-Vazirani], or something weaker?
Can one obtain subexponential-time algorithms beating the current best for MaxCut or Sparsest Cut?

54 Thank You

55 Rounding
Case 1: average entropy < ε. The SDP solution is nearly integral; it can be rounded to an integral solution of value c − O(ε).
Case 2: average entropy > ε. Conditioning on a random vertex drops the average entropy by δ.

56 Do Instances with This Global Correlation Property Arise?
Main Theorem (informal): If an instance I of a problem satisfies the (c, ε, δ)-global correlation property, then the (1/δ)-round SDP solution on the instance I is within O(ε) of the integral value.
Do instances with this global correlation property arise?

