# Optimal Algorithms and Inapproximability Results for Every CSP?

Prasad Raghavendra, University of Washington, Seattle


Constraint Satisfaction Problem. A classic example: Max-3-SAT. Given a 3-SAT formula, find an assignment to the variables that satisfies the maximum number of clauses, or equivalently the largest fraction of clauses.

Constraint Satisfaction Problem. Instance: a set of variables, with predicates P_i applied to the variables; find an assignment that satisfies the largest fraction of constraints. Problem: domain {0, 1, …, q−1}; predicates {P_1, P_2, …, P_r}, each P_i : [q]^k → {0,1}. Example (Max-3-SAT): variables {x_1, x_2, x_3, x_4, x_5}; constraints: 4 clauses; domain {0,1}; predicate P_1(x, y, z) = x ∨ y ∨ z.
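To make the objective concrete, here is a small sketch (my own illustrative encoding, not from the slides) that computes the fraction of clauses of a 3-SAT formula satisfied by an assignment:

```python
# A clause is a list of literals (variable_index, negated); a clause holds
# if some literal evaluates to True under the assignment.
def satisfied_fraction(clauses, assignment):
    holds = lambda clause: any(assignment[i] != neg for (i, neg) in clause)
    return sum(holds(c) for c in clauses) / len(clauses)

# 4 illustrative clauses over x1..x5 (0-indexed here)
clauses = [
    [(0, False), (1, False), (2, True)],
    [(1, True),  (3, False), (4, False)],
    [(0, True),  (2, False), (4, True)],
    [(2, False), (3, True),  (4, False)],
]
print(satisfied_fraction(clauses, [True, False, True, True, False]))  # 1.0
```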

Generalized CSP (GCSP). Replace predicates by payoff functions (bounded, real-valued). Problem: domain {0, 1, …, q−1}; payoffs {P_1, P_2, …, P_r}, each P_i : [q]^k → [−1, 1]. Payoff functions can be negative, so GCSPs can model minimization problems like Multiway Cut and Min-Uncut. Objective: find an assignment that maximizes the average payoff.

Examples of GCSPs: Max-3-SAT, Max Cut, Max DiCut, Multiway Cut, Metric Labelling, 0-Extension, Unique Games, d-to-1 Games, Label Cover, Horn SAT.

Unique Games: A Special Case. E2LIN mod p: given a set of linear equations of the form x_i − x_j = c_ij (mod p), find a solution that satisfies the maximum number of equations. Example: x − y = 11 (mod 17), x − z = 13 (mod 17), …, z − w = 15 (mod 17).
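A tiny sketch of the E2LIN objective (the equations and the assignment below are made up for illustration):

```python
# Count how many equations x_i - x_j = c (mod p) an assignment satisfies.
def satisfied(equations, x, p):
    return sum((x[i] - x[j]) % p == c % p for (i, j, c) in equations)

p = 17
# equations (i, j, c) meaning x_i - x_j = c (mod p), over variables 0..3
equations = [(0, 1, 11), (0, 2, 13), (2, 3, 15)]
x = [12, 1, 16, 1]   # 12-1=11, 12-16=-4=13 (mod 17), 16-1=15
print(satisfied(equations, x, p))  # 3
```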

Unique Games Conjecture [Khot 02], in an equivalent version due to [Khot-Kindler-Mossel-O'Donnell]: for every ε > 0, the following problem is NP-hard for a large enough prime p. Given an E2LIN mod p system, distinguish between: (1) there is an assignment satisfying a 1−ε fraction of the equations; (2) no assignment satisfies more than an ε fraction of the equations.

Unique Games Conjecture: a notorious open problem, with no general consensus either way. Hardness results: no constant-factor approximation for Unique Games [Feige-Reichman]. Algorithms on (1−ε)-satisfiable instances: [Khot 02], [Trevisan], [Gupta-Talwar] (satisfying a 1 − O(ε log n) fraction), [Charikar-Makarychev-Makarychev], [Chlamtac-Makarychev-Makarychev], [Arora-Khot-Kolla-Steurer-Tulsiani-Vishnoi].

Why is UGC important?

| Problem | Best approximation algorithm | NP-hardness | Unique Games hardness |
|---|---|---|---|
| Vertex Cover | 2 | 1.36 | 2 |
| Max Cut | 0.878 | 0.941 | 0.878 |
| Max 2-SAT | 0.9401 | 0.9546 | 0.9401 |
| Sparsest Cut | | 1 + ε | every constant |
| Max k-CSP | | | |

UG hardness results are intimately connected to the limitations of semidefinite programming.

Semidefinite Programming

Max Cut. Input: a weighted graph G (figure: example graph with edge weights 10, 15, 3, 7, 1, 1). Find a cut that maximizes the total weight of the crossing edges.

Max Cut SDP. Quadratic program: variables x_1, x_2, …, x_n with each x_i = 1 or −1; maximize Σ_{edges (i,j)} w_ij (1 − x_i x_j)/2. Relax each x_i to a unit vector instead of ±1, replacing all products by inner products of vectors. Semidefinite program: variables v_1, v_2, …, v_n with |v_i|² = 1; maximize Σ_{edges (i,j)} w_ij (1 − v_i·v_j)/2.

MaxCut Rounding (figure: SDP vectors v_1, …, v_5 on the sphere). Cut the sphere by a random hyperplane, and output the induced graph cut: a 0.878-approximation for the problem.
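The hyperplane rounding above can be sketched as follows (a toy illustration; the vectors and the one-edge graph are made up, not an actual SDP optimum):

```python
import random

random.seed(0)

# Sketch of random-hyperplane rounding: pick a random Gaussian direction
# and split the unit vectors by the sign of their projection onto it.
def hyperplane_round(vectors):
    dim = len(vectors[0])
    g = [random.gauss(0.0, 1.0) for _ in range(dim)]  # random direction
    dot = lambda v: sum(a * b for a, b in zip(v, g))
    return [1 if dot(v) > 0 else -1 for v in vectors]

def cut_value(edges, labels):
    # total weight of edges whose endpoints land on opposite sides
    return sum(w for (i, j, w) in edges if labels[i] != labels[j])

# toy check: two antipodal vectors always end up on opposite sides
labels = hyperplane_round([(1.0, 0.0), (-1.0, 0.0)])
print(cut_value([(0, 1, 1.0)], labels))  # 1.0
```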

General Boolean 2-CSPs. The SDP expresses the total payoff of an integral solution with v_i = 1 or −1 and v_0 = 1, together with triangle inequality constraints.

2-CSP over {0, …, q−1}: total payoff.

Arbitrary k-ary GCSP: the SDP is similar to the one used in the [Karloff-Zwick] Max-3-SAT algorithm. It is weaker than k rounds of the Lasserre / LS+ hierarchies.

Results

Two Curves. Fix a GCSP. Integrality gap curve S(c): the smallest value of the integral optimum over instances with SDP value c. UGC hardness curve U(c): the best polynomial-time computable solution value on instances with value c, assuming UGC. If UGC is true, U(c) ≥ S(c); if UGC is false, U(c) is meaningless!

UG Hardness Result. Roughly speaking: assuming UGC, the SDPs (I), (II), (III) give the best possible approximation for every CSP. With c = SDP value, S(c) = SDP integrality gap curve, U(c) = UGC hardness curve: Theorem 1: for every constant η > 0 and every GCSP, U(c) < S(c + η) + η.

Consequences. If UGC is true, then adding more constraints does not help for any CSP: the Lovász-Schrijver, Lasserre, and Sherali-Adams hierarchies do not yield better approximation ratios for any CSP in the worst case.

Efficient Rounding Scheme. Roughly speaking: there is a generic polynomial-time rounding scheme that is optimal for every CSP, assuming UGC. With c = SDP value, S(c) = SDP integrality gap curve, U(c) = UGC hardness curve: Theorem: for every constant η > 0 and every GCSP, there is a polynomial-time rounding scheme that outputs a solution of value U(c − η) − η.

If UGC is true, then for every generalized constraint satisfaction problem the algorithm's performance matches the NP-hardness curve. If UGC is false, the hardness result doesn't make sense; how good is the rounding scheme then?

Unconditionally. Roughly speaking: for 2-CSPs, the approximation ratio obtained is at least the curve S(c); the rounding scheme achieves the integrality gap of the SDP for 2-CSPs (both binary and q-ary). With S(c) = SDP integrality gap curve: Theorem: let A(c) be the rounding scheme's performance on inputs with SDP value c. For every constant η > 0, A(c) > S(c − η) − η.

As good as the best. SDP(II) and SDP(III) are the strongest SDPs used in approximation algorithms for 2-CSPs, so the generic algorithm is at least as good as the best known algorithms for 2-CSPs. Examples: Max Cut [Goemans-Williamson], Max 2-SAT [Lewin-Livnat-Zwick], Unique Games [Charikar-Makarychev-Makarychev].

Computing Integrality Gaps. Theorem: for any η and any 2-CSP, the curve S(c) can be computed within error η (the running time depends on η and the domain size q). This yields explicit bounds on the size of an integrality gap instance for any 2-CSP.

Related Work

| Problem | Best approximation algorithm | Unique Games hardness |
|---|---|---|
| Vertex Cover | 2 | 2 [Khot-Regev] |
| Max Cut | 0.878 | 0.878 [Khot-Kindler-Mossel-O'Donnell] |
| Max 2-SAT | 0.9401 | 0.9401 [Austrin] |
| Sparsest Cut | | every constant [Chawla-Krauthgamer-…] |
| Max k-CSP | | [Trevisan-Samorodnitsky] |

[Austrin 07]: assuming UGC and a certain additional conjecture, "for every boolean 2-CSP, the best approximation is given by SDP(III)". [O'Donnell-Wu 08]: obtain matching approximation algorithm, UGC hardness, and SDP gaps for Max Cut.

Proof Overview

Dictatorship Test. Given a function F : {−1,1}^R → {−1,1}: toss random coins, make a few queries to F, and output either ACCEPT or REJECT. If F is a dictator function, F(x_1, …, x_R) = x_i: Pr[ACCEPT] = completeness. If F is far from every dictator function (no influential coordinate): Pr[ACCEPT] = soundness.

Connections. An SDP gap instance (SDP = 0.9, OPT = 0.7) yields a dictatorship test (completeness = 0.9, soundness = 0.7) [Khot-Kindler-Mossel-O'Donnell], which yields UG hardness (0.9 vs 0.7) [Khot-Vishnoi], shown for sparsest cut and max cut. [This paper]: all these conversions hold for every GCSP.

A Dictatorship Test for MaxCut. A dictatorship test is a graph G on the hypercube {−1,1}^100; a cut gives a function F on the hypercube. Completeness: the value of dictator cuts F(x) = x_i. Soundness: the maximum value attained by a cut that is far from every dictator.

An Example: MaxCut. Given the SDP solution v_1, …, v_5 for graph G and the 100-dimensional hypercube: Completeness: value of dictator cuts = SDP value(G). Soundness: a cut far from every dictator gives a cut on graph G with the same value; in other words, soundness ≤ OPT(G).

From Graphs to Tests. Given graph G (n vertices) with SDP solution v_1, …, v_5 and the 100-dimensional hypercube {−1,1}^100: for each edge e of G, connect every pair of hypercube vertices separated by the length of e. The resulting graph H lives in a dimension that is a constant, independent of the size of G.

Completeness = E_{choice of edge e=(u,v) in G} [ E_{X,Y in the 100-dim hypercube at distance |u−v|²} [ (F(X) − F(Y))² ] ]. For each edge e, connect every pair of hypercube vertices separated by the length of e. Set F(X) = X_1; then (X_1 − Y_1)² counts the event X_1 ≠ Y_1, which happens with probability |u−v|², hence completeness = SDP value(G).

The Invariance Principle. Invariance principle for low-degree polynomials [Rotar], [Mossel-O'Donnell-Oleszkiewicz], [Mossel 2008]: "If a low-degree polynomial F has no influential coordinate, then F({−1,1}^n) and F(Gaussian) have similar distributions." This generalizes the following fact: the sum of a large number of ±1 random variables has a distribution similar to the sum of a large number of Gaussian random variables.
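The special case quoted above is easy to check numerically; this sketch (mine, not from the slides) compares a normalized sum of ±1 variables against a standard Gaussian sample:

```python
import random
import statistics

random.seed(0)
n, trials = 100, 20000

# normalized sums of n Rademacher (+/-1) random variables
samples = [sum(random.choice((-1, 1)) for _ in range(n)) / n ** 0.5
           for _ in range(trials)]
gauss = [random.gauss(0.0, 1.0) for _ in range(trials)]

tail = lambda xs: sum(x > 1.0 for x in xs) / len(xs)

# mean ~ 0, variance ~ 1, and the tail mass beyond 1.0 is close to
# the Gaussian tail mass
print(abs(statistics.mean(samples)) < 0.05)            # True
print(abs(statistics.pvariance(samples) - 1.0) < 0.05) # True
print(abs(tail(samples) - tail(gauss)) < 0.05)         # True
```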

From the Hypercube to the Sphere. F : {−1,1}^100 → {−1,1}. Express F as a multilinear polynomial P using the Fourier expansion, thus extending it to the 100-dimensional sphere; P maps to the real numbers, but its values are nearly always in [−1,1]. Since F is far from a dictator, by the invariance principle its behaviour on the sphere is similar to its behaviour on the hypercube.

A Graph on the Sphere. Given graph G (n vertices) with SDP solution v_1, …, v_5: for each edge e of G, connect every pair of points on the 100-dimensional sphere separated by the length of e, obtaining a graph S.

Hypercube vs Sphere. F : {−1,1}^100 → {−1,1} is a cut far from every dictator; P : sphere → nearly {−1,1} is the multilinear extension of F. By the invariance principle, the MaxCut value of F on H ≈ the MaxCut value of P on S.

Soundness. For each edge e in the graph G, connect every pair of points separated by the length of e. Alternatively, generate S as follows: take the union of all possible rotations of the graph G, so S consists of a union of disjoint copies of G. Thus the MaxCut value of S ≤ the MaxCut value of G, and hence the MaxCut value of F on H is at most the MaxCut value of G: soundness ≤ MaxCut(G).

Algorithmically: given a cut F of the hypercube graph H, extend F to a function P on the sphere using its Fourier expansion. Pick a random rotation of the SDP solution to the graph G; this gives a random copy G_c of G inside the sphere graph S. Output the solution assigned by P to G_c.

Rounding with the polynomial P(y_1, …, y_100): roughly and formally.
- Sample random directions: sample 100 independent vectors g^(1), g^(2), …, g^(100), each with i.i.d. Gaussian components.
- Project along them: project each v_i along all directions, Y_i^(j) = v_0·v_i + (1−ε)(v_i − (v_0·v_i)v_0)·g^(j).
- Compute P on the projections: x_i = P(Y_i^(1), Y_i^(2), …, Y_i^(100)).
- Round the output of P: if x_i > 1, set x_i = 1; if x_i < −1, set x_i = −1; if x_i ∈ [−1,1], set x_i = 1 with probability (1+x_i)/2 and −1 with probability (1−x_i)/2.

Key Lemma. Any CSP instance G yields a dictatorship test DICT_G on functions F : {−1,1}^n → {−1,1}, where (1) the tests of the verifier are the same as the constraints in instance G, and (2) completeness = SDP(G). Conversely, any function F : {−1,1}^n → {−1,1} yields a rounding scheme Round_F on CSP instances G. If F is far from a dictator, then Round_F(G) ≈ DICT_G(F).

UG Hardness Result. A worst-case gap instance (SDP = c, OPT = s) yields a dictatorship test (completeness = c, soundness ≤ s), which yields UG hardness (completeness = c, soundness ≤ s). Theorem 1: for every constant η > 0 and every GCSP, U(c) < S(c + η) + η.

Generic Rounding Scheme. Solve SDP(III) to obtain vectors (v_1, v_2, …, v_n); add a little noise to the SDP solution. For all multilinear polynomials P(y_1, y_2, …, y_100) with coefficients in [−1,1]: round using P. Output the best solution obtained.

Algorithm. Given an instance I with SDP = c and OPT unknown, build the dictatorship test Dict(I) with completeness = c. By UG hardness, the soundness of any dictatorship test with completeness c is ≥ U(c), so there is some function F : {0,1}^R → {0,1} with Pr[F is accepted] ≥ U(c). By the Key Lemma, the performance of F as a rounding polynomial on instance I equals Pr[F is accepted] ≥ U(c).

Related Developments. Multiway Cut and Metric Labelling problems [Manokaran, Naor, Schwartz, Raghavendra]; Maximum Acyclic Subgraph problem [Guruswami, Manokaran, Raghavendra]; Bipartite Quadratic Optimization, i.e. computing the Grothendieck constant [Raghavendra, Steurer].

Conclusions. Unique Games and the invariance principle connect integrality gaps, hardness results, dictatorship tests, and rounding algorithms. These connections lead to new algorithms and hardness results, unifying several known results.

Thank You

Rounding Scheme (for Boolean CSPs). The rounding scheme was discovered by reversing the soundness analysis; this fact was independently observed by Yi Wu.

MaxCut Rounding. Cut the sphere by a random hyperplane, and output the induced graph cut. Equivalently: pick a random direction g; for each vector v_i, project v_i along g, y_i = v_i·g; assign x_i = 1 if y_i > 0, and x_i = 0 otherwise.

SDP Rounding Schemes. SDP vectors (v_1, v_2, …, v_n) → random projection → projections (y_1, y_2, …, y_n) → process the projections → assignment. For any CSP, it is enough to do the following: instead of one random projection, pick sufficiently many (say 100), and use a multilinear polynomial P to process the projections.

UG Hardness Results. A worst-case gap instance (SDP = c, OPT = s) yields a dictatorship test (completeness = c, soundness ≤ s), which yields UG hardness (completeness = c, soundness ≤ s). Theorem 1: for every constant η > 0 and every GCSP, U(c) < S(c + η) + η.

Multiway Cut and Labelling Problems [Manokaran, Naor, Schwartz, Raghavendra]. 3-Way Cut: separate the 3 terminals while cutting the minimum number of edges. Theorem: assuming the Unique Games Conjecture, the earthmover linear program gives the best approximation. Theorem: unconditionally, the simple SDP does not give better approximations than the LP.

Maximum Acyclic Subgraph [Guruswami, Manokaran, Raghavendra]. Given a directed graph, order the vertices to maximize the number of forward edges. Theorem: assuming the Unique Games Conjecture, the best algorithm's output is only as good as a random ordering. Theorem: unconditionally, the simple SDP does not give better approximations than random.

The Grothendieck Constant [Raghavendra, Steurer]. The Grothendieck constant is the smallest constant K(H) for which a certain inequality holds for all matrices; it is exactly the integrality gap of the SDP for bipartite quadratic optimization. Its value is known to lie between 1.6 and 1.7, but its exact value is unknown.

Grothendieck Constant [Raghavendra, Steurer]. Theorem: there is an algorithm to compute arbitrarily good approximations to the Grothendieck constant. Theorem: there is an efficient algorithm that solves the bipartite quadratic optimization problem to an approximation factor equal to the Grothendieck constant.

If all this looks deceptively simple, it is because there was deception: the work lies in handling several probability distributions at once.

UG Hardness Results. A worst-case gap instance (SDP = c, OPT = s) yields a dictatorship test (completeness = c, soundness ≤ s), which yields UG hardness (completeness = c, soundness ≤ s). Best UG hardness = integrality gap: U(c) < S(c + η) + η.

Algorithm. Given an instance I with SDP = c and OPT unknown, build the dictatorship test Dict(I) with completeness = c. By UG hardness, the soundness of any dictatorship test with completeness c is ≥ U(c), so there is some function F : {0,1}^R → {0,1} with Pr[F is accepted] ≥ U(c). By the Key Lemma, the performance of F as a rounding polynomial on instance I equals Pr[F is accepted] ≥ U(c).

Unconditional Results for 2-CSPs. Suppose on some instance I with SDP value c the algorithm outputs a solution with value s. Then for every function F far from a dictator, the performance of F in rounding I is ≤ s; by the Key Lemma, Pr[F is accepted by Dict(I)] ≤ s for every such F. Thus Dict(I) is a test with soundness s.

The dictatorship test Dict(I) (completeness = c, soundness = s) yields UG hardness (completeness = c, soundness = s), which by [Khot-Vishnoi] yields a UG integrality gap instance, and hence an integrality gap instance (SDP = c, OPT ≤ s). So the algorithm's performance matches the integrality gap of the SDP.

Computing Integrality Gaps. The integrality gap of an SDP relaxation is the worst-case ratio (integral optimum)/(SDP optimum), where the worst case ranges over all instances, an infinite set. Due to the tight relation between integrality gaps and dictatorship tests for 2-CSPs, the integrality gap also equals the worst-case ratio soundness/completeness, where this time the worst case ranges over all dictatorship tests on {−1,1}^R: a finite set that can be discretized.

Key Lemma: Through an Example. A triangle on vertices 1, 2, 3. SDP: variables v_1, v_2, v_3 with |v_1|² = |v_2|² = |v_3|² = 1; maximize the SDP objective.

Local Random Variables. Let c = SDP value and v_1, v_2, v_3 = SDP vectors. Fix an edge e = (1,2). There exist random variables a_1, a_2 taking values in {−1,1} such that E[a_1 a_2] = v_1·v_2, E[a_1²] = |v_1|², E[a_2²] = |v_2|². For every edge, there is a local distribution (A_12, A_13, A_23) over integral solutions such that all moments of order at most 2 match the inner products.

Dictatorship Test. Let c = SDP value, v_1, v_2, v_3 = SDP vectors, and A_12, A_23, A_31 = the local distributions for the Max Cut instance. Input function: F : {−1,1}^R → {−1,1}. Pick an edge (i,j); generate a_i, a_j in {−1,1}^R with the k-th coordinates a_ik, a_jk drawn from the distribution A_ij; add noise to a_i, a_j; accept if F(a_i) ≠ F(a_j).

Analysis. Recall the test: pick an edge (i,j); generate a_i, a_j in {−1,1}^R with the k-th coordinates a_ik, a_jk drawn from A_ij; add noise to a_i, a_j; accept if F(a_i) ≠ F(a_j).

Completeness. Suppose the input function is a dictator, F(x) = x_1, and (a_1, a_2) is sampled from A_12. Then E[a_11 a_21] = v_1·v_2, E[a_11²] = |v_1|², E[a_21²] = |v_2|². Summing over the edges, Pr[Accept] = SDP value(v_1, v_2, v_3).

Global Random Variables. Let c = SDP value and v_1, v_2, v_3 = SDP vectors. Let g be a random Gaussian vector (each coordinate i.i.d. normal) and set b_1 = v_1·g, b_2 = v_2·g, b_3 = v_3·g. This gives a global distribution B = (b_1, b_2, b_3) over real numbers such that all moments of order at most 2 match the inner products: E[b_1 b_2] = v_1·v_2, E[b_2 b_3] = v_2·v_3, E[b_3 b_1] = v_3·v_1, and E[b_i²] = |v_i|² for each i.
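The second-moment claim can be verified by simulation (an illustrative sketch; the two unit vectors are hand-picked):

```python
import random

random.seed(0)

v1, v2 = (1.0, 0.0), (0.6, 0.8)   # toy unit vectors with v1.v2 = 0.6
dot = lambda a, b: sum(x * y for x, y in zip(a, b))

# b_i = v_i . g for a random Gaussian vector g; average b_1*b_2 over
# many draws and compare with the inner product v1.v2
trials = 200000
acc = 0.0
for _ in range(trials):
    g = (random.gauss(0, 1), random.gauss(0, 1))
    acc += dot(v1, g) * dot(v2, g)
print(abs(acc / trials - dot(v1, v2)) < 0.02)  # True: moments match
```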

Rounding with Polynomials. Input polynomial: F(x_1, x_2, …, x_R). Generate b_1 = (b_11, …, b_1R), b_2 = (b_21, …, b_2R), b_3 = (b_31, …, b_3R), with each coordinate triple (b_1t, b_2t, b_3t) drawn according to the global distribution B. Compute F(b_1), F(b_2), F(b_3), round them to {−1,1}, and output the rounded solution.

Invariance. Suppose F is far from every dictator. Since A_12 and B have the same first two moments, (F(a_1), F(a_2)) has nearly the same distribution as (F(b_1), F(b_2)), and F(b_1), F(b_2) are close to {−1,1}.

From Gap Instances to Gap Instances. An instance with SDP = c and OPT = s yields a dictatorship test (completeness = c, soundness = s), which yields UG hardness (completeness = c, soundness = s), which yields a UG gap instance for a strong SDP, and hence a gap instance for the strong SDP for the CSP.

2-CSP over {0, …, q−1}. For each variable u in the CSP, introduce q variables {u_0, u_1, …, u_{q−1}}; when u = c, set u_c = 1 and u_i = 0 for i ≠ c. Payoff for u, v: P(u,v) = Σ_a Σ_b P(a,b) u_a v_b.

2-CSP over {0, …, q−1}: total payoff.

Arbitrary k-ary GCSP: the SDP is similar to the one obtained by k rounds of Lasserre.

Rounding Scheme (for Boolean CSPs). The rounding scheme was discovered by reversing the soundness analysis; this fact was independently observed by Yi Wu.

SDP Rounding Schemes. SDP vectors (v_1, v_2, …, v_n) → random projection → projections (y_1, y_2, …, y_n) → process the projections → assignment. For any CSP, it is enough to do the following: instead of one random projection, pick sufficiently many, and use a multilinear polynomial P to process the projections.

Rounding by the polynomial P(y_1, …, y_R): roughly and formally.
- Sample R random directions: sample R independent vectors w^(1), w^(2), …, w^(R), each with i.i.d. Gaussian components.
- Project along them: project each v_i along all directions, Y_i^(j) = v_0·v_i + (1−ε)(v_i − (v_0·v_i)v_0)·w^(j).
- Compute P on the projections: x_i = P(Y_i^(1), Y_i^(2), …, Y_i^(R)).
- Round the output of P: if x_i > 1, set x_i = 1; if x_i < −1, set x_i = −1; if x_i ∈ [−1,1], set x_i = 1 with probability (1+x_i)/2 and −1 with probability (1−x_i)/2.
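The steps above can be sketched in code (a toy rendering under assumptions: the SDP vectors, v_0, ε, R, and the single placeholder polynomial P are all made up for illustration; the actual algorithm enumerates many polynomials):

```python
import random

random.seed(1)
R, eps = 8, 0.1
v0 = (1.0, 0.0, 0.0)                      # the fixed constant vector
vs = [(0.6, 0.8, 0.0), (0.0, 0.6, 0.8)]  # toy SDP unit vectors

dot = lambda a, b: sum(x * y for x, y in zip(a, b))

# Step 1: sample R independent Gaussian direction vectors w^(j).
ws = [[random.gauss(0, 1) for _ in range(3)] for _ in range(R)]

# Step 2: project, Y_i^(j) = v0.v_i + (1-eps)(v_i - (v0.v_i)v0).w^(j).
def projections(v):
    c = dot(v0, v)
    perp = tuple(x - c * y for x, y in zip(v, v0))
    return [c + (1 - eps) * dot(perp, w) for w in ws]

# Step 3: evaluate a rounding polynomial on the projections.
# Placeholder: the dictator polynomial P(y) = y_1.
P = lambda ys: ys[0]

# Step 4: clamp to [-1, 1], then round randomly to +/-1.
def round_output(x):
    x = max(-1.0, min(1.0, x))
    return 1 if random.random() < (1 + x) / 2 else -1

assignment = [round_output(P(projections(v))) for v in vs]
print(all(a in (-1, 1) for a in assignment))  # True
```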

Algorithm. Solve SDP(III) to obtain vectors (v_1, v_2, …, v_n); smoothen the SDP solution. For all multilinear polynomials P(y_1, y_2, …, y_R), where R is a constant parameter: round using P. Output the best solution obtained.

Discretization. "For all multilinear polynomials P(y_1, y_2, …, y_R) do": restrict to multilinear polynomials with coefficients bounded within [−1,1], and discretize this set; there are then at most a constant number of such polynomials.

Smoothing the SDP Vectors. Let u_1, u_2, …, u_n denote the SDP vectors corresponding to the distribution over integral solutions that assigns each variable uniformly and independently at random. Substitute v*_i · v*_j = (1−ε)(v_i·v_j) + ε(u_i·u_j).
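In inner-product form, the substitution can be sketched as follows (illustrative; it uses the standard fact that for a uniformly random independent boolean assignment, u_i·u_j is 1 when i = j and 0 otherwise):

```python
eps = 0.1

# v*_i . v*_j = (1 - eps)(v_i . v_j) + eps (u_i . u_j),
# where u_i . u_j = 1 if i == j else 0 (uniform independent assignment).
def smoothed(i, j, vdot, eps=eps):
    u = 1.0 if i == j else 0.0
    return (1 - eps) * vdot + eps * u

print(round(smoothed(0, 0, 1.0), 10))  # 1.0 : unit norms are preserved
print(round(smoothed(0, 1, 1.0), 10))  # 0.9 : perfect correlations shrink
```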

Non-Boolean CSPs. There are q rounding polynomials instead of one. Projection is done in the same fashion: Y_i^(j) = v_0·v_i + (1−ε)(v_i − (v_0·v_i)v_0)·w^(j). To round the output of the polynomials, do the following:

From Gap Instances to Gap Instances. A worst-case instance with SDP = c and OPT = s yields a dictatorship test (completeness = c, soundness = s), which yields UG hardness (completeness = c, soundness = s), which yields a UG gap instance for a strong SDP, and hence a gap instance for the strong SDP for the CSP.

Backup Slides

Rounding for larger domains

Remarks. For every CSP and every ε > 0, there is a large enough constant R such that the approximation achieved is within ε of optimal, if the Unique Games Conjecture is true. For 2-CSPs, the approximation ratio is within ε of the integrality gap of SDP(I).

Rounding Schemes. Previously, every CSP had its own very different rounding scheme, often with a complex analysis: Max Cut used random hyperplane cutting; Multiway Cut used a complicated cutting of the simplex. Our algorithm is a generic rounding procedure; its analysis does not compute the approximation factor, but indirectly shows that it equals the integrality gap.

"Sample R independent vectors w_1, w_2, …, w_R, each with i.i.d. Gaussian components. For all multilinear polynomials P(y_1, y_2, …, y_R), compute x_i = P(v_i·w_1, v_i·w_2, …, v_i·w_R)." Goemans-Williamson rounding uses one single random projection; this algorithm uses a constant number of random projections.

Semidefinite Programming: a linear program over the inner products; the strongest algorithmic tool in approximation algorithms, used in a large number of algorithms. The integrality gap of an SDP relaxation is the worst-case ratio (integral optimum)/(SDP optimum).

More Constraints? Most SDP algorithms use simple relaxations with few constraints; [Arora-Rao-Vazirani] used the triangle inequalities to get a sqrt(log n) approximation for sparsest cut. Can stronger SDPs yield better approximation ratios for problems of interest?

Max Cut. Input: a weighted graph G (figure: example graph with edge weights 10, 15, 3, 7, 1, 1). Find a cut that maximizes the total weight of the crossing edges.

Max Cut SDP. Quadratic program: variables x_1, x_2, …, x_n with each x_i = 1 or −1; maximize Σ_{edges (i,j)} w_ij (1 − x_i x_j)/2. Relax each x_i to a unit vector instead of ±1, replacing all products by inner products of vectors. Semidefinite program: variables v_1, v_2, …, v_n with |v_i|² = 1; maximize Σ_{edges (i,j)} w_ij (1 − v_i·v_j)/2.

Max Cut SDP. Semidefinite program: variables v_1, v_2, …, v_n with |v_i|² = 1; maximize Σ_{edges (i,j)} w_ij (1 − v_i·v_j)/2.