Dictator tests and Hardness of approximating Max-Cut-Gain Ryan O’Donnell Carnegie Mellon (includes joint work with Subhash Khot of Georgia Tech)


Talk outline
1. Constraint satisfaction problems and hardness of approximation
2. Dictator Tests & “Slightly Dictator” Tests
3. A new Slightly Dictator Test and hardness of approximation result for the Max-Cut-Gain problem


Constraint Satisfaction Problems
Let Φ be a class of predicates (“constraints”) on a few bits; e.g., “X ⊕ Y ⊕ Z = b” or “X ≠ Y”. The “Max-Φ” constraint satisfaction problem: given m predicates/constraints over n variables, find an assignment satisfying as many as possible. Examples: Max-3Lin, Max-Cut, Max-2SAT.
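The Max-Φ objective is just a count of satisfied predicates. A minimal sketch (the representation of constraints as predicate/index pairs is my own choice, not from the talk), using Max-Cut on a triangle as the instance:

```python
def frac_satisfied(constraints, assignment):
    """Fraction of predicates satisfied by a 0/1 assignment.
    Each constraint is (predicate, variable_indices)."""
    ok = sum(pred(*(assignment[i] for i in idxs)) for pred, idxs in constraints)
    return ok / len(constraints)

# Max-Cut as a Max-Phi instance: every constraint is "X != Y", one per edge.
cut_edges = [(0, 1), (1, 2), (2, 0)]           # a triangle
constraints = [(lambda x, y: x != y, e) for e in cut_edges]
print(frac_satisfied(constraints, [0, 1, 0]))  # best cut of a triangle: 2/3
```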

Approximating CSPs
“A is a (c, s)-approximation algorithm for Max-Φ”: given an instance in which ≥ c fraction of the constraints can be satisfied, A outputs a solution satisfying ≥ s fraction of the constraints. A should run in polynomial time.

Approximating CSPs
Gaussian elimination is a (1, 1)-approximation algorithm for Max-3Lin. The best known (1 − ε, s)-approximation for Max-3Lin is a trivial algorithm with s = ½: output all 0’s or all 1’s. (This is also a (½, ½)-approximation.) Goemans and Williamson ’95 gave a very famous approximation algorithm for Max-Cut, which is a .878-approximation and also a (c, s)-approximation for every s < .878c. G&W is a (.51, .45)-approximation for Max-Cut, worse than trivial (the greedy algorithm is a (½, ½)-approximation algorithm). Charikar and Wirth ’04 gave a (½ + ε, ½ + Ω(ε/log(1/ε)))-approximation for Max-Cut. (A “Max-Cut-Gain” algorithm.)
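The trivial Max-3Lin algorithm above is easy to make concrete: each equation x ⊕ y ⊕ z = b is satisfied by the all-0’s assignment exactly when b = 0, and by the all-1’s assignment exactly when b = 1 (since 1 ⊕ 1 ⊕ 1 = 1), so the better of the two always satisfies at least half the equations. A sketch (the (i, j, k, b) equation encoding is my own assumption):

```python
def trivial_max3lin(equations, n):
    """equations: list of (i, j, k, b) meaning x_i ^ x_j ^ x_k = b.
    Returns the better of the all-zeros / all-ones assignments."""
    # All-0's satisfies exactly the b = 0 equations; all-1's exactly the
    # b = 1 equations; so one of the two satisfies >= half of them.
    zeros_ok = sum(1 for (_, _, _, b) in equations if b == 0)
    if zeros_ok >= len(equations) - zeros_ok:
        return [0] * n
    return [1] * n

eqs = [(0, 1, 2, 0), (1, 2, 3, 1), (0, 2, 3, 0)]
print(trivial_max3lin(eqs, 4))  # → [0, 0, 0, 0], satisfying 2 of the 3 equations
```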

Hardness of approximation
PCP (“Probabilistically Checkable Proofs”) technology is used to prove NP-hardness of (c, s)-approximation. Håstad ’97: (1 − ε, ½ + ε)-approximating Max-3Lin is NP-hard. Håstad ’97: (1, 7/8 + ε)-approximating Max-3SAT is NP-hard. KKMO ’04 + MOO ’05: doing any better than the Goemans-Williamson approximation algorithm is NP-hard*. * Assuming the “Unique Games Conjecture”.

Hardness of approximation
PCP hardness-of-approximation rule of thumb: “To prove hardness of (c, s)-approximating Max-Φ, it suffices to give a (c, s)-Slightly-Dictator-Test where the test is from Φ.”

Talk outline
1. Constraint satisfaction problems and hardness of approximation
2. Dictator Tests & “Slightly Dictator” Tests
3. A new Slightly Dictator Test and hardness of approximation result for the Max-Cut-Gain problem

Dictators
We will be considering m-bit boolean functions f : {0,1}^m → {0,1}. Function f is called a “Dictator” if it is projection to one coordinate: f(x) = x_i for some 1 ≤ i ≤ m. (AKA “Singleton”, AKA “Long Code”.)

Dictator Testing
In the field of “Property Testing”, an unknown f is given as a black box. We want to determine whether f belongs to some class of functions C, querying f on as few strings as possible (constantly many). Clearly, we must use randomization and must admit some chance of error. For hardness of approximation, the relevant C is the class of all m Dictator functions.

Testing Dictators
A (non-adaptive) Dictator Test: picks x^1, …, x^q ∈ {0,1}^m in some random fashion; picks a predicate ψ on q bits; queries f(x^1), …, f(x^q); says “YES” or “NO” according to ψ(f(x^1), …, f(x^q)). Each f : {0,1}^m → {0,1} has some probability of “passing” the test. Hope: the probability is large for dictators, and small for non-dictators.

Correlation
The correlation of f and g is a number between −1 and 1. If f and g are “highly correlated” – i.e., they agree on almost all inputs – then their probabilities of passing the test will be essentially the same. So if g is highly correlated with a Dictator, the test can’t help but let g pass with high probability.
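One standard convention for correlation of boolean functions (assumed here; the slide does not show its formula) is the fraction of inputs on which they agree minus the fraction on which they disagree, computed below by brute-force enumeration:

```python
from itertools import product

def correlation(f, g, m):
    """Correlation of two boolean functions on {0,1}^m: fraction of
    inputs where they agree minus fraction where they disagree."""
    agree = sum(f(x) == g(x) for x in product([0, 1], repeat=m))
    return 2 * agree / 2**m - 1

dictator1 = lambda x: x[0]
print(correlation(dictator1, dictator1, 3))       # → 1.0 (perfect correlation)
print(correlation(dictator1, lambda x: x[1], 3))  # → 0.0 (independent coordinates)
```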

Basic Dictator Testing
If f is a Dictator, it passes with probability 1. If f has correlation < 1 − δ with every Dictator, it passes with probability at most 1 − Ω(δ). The number of queries q should be an absolute constant. (Like 6 or something.) (Remark 1: given such a test, you can get a “standard” Dictator Test by repeating O(1/δ) times and saying “YES” iff all tests pass. Remark 2: ⇒ “Assignment tester” (of exponential length) [Din06].)

Examples
Bellare-Goldreich-Sudan ’95: O(1) queries. Håstad ’97 probably gave a 3-query one (he at least could’ve). A 3-query one; if you know Fourier analysis, the proof is an easy homework exercise:
With probability ½, do the BLR test: pick x, y uniformly, set z = x ⊕ y, and test that f(x) ⊕ f(y) ⊕ f(z) = 0.
With probability ½, do the NAE test: for each i = 1…m, choose (x_i, y_i, z_i) uniformly from {0,1}³ ∖ {(0,0,0), (1,1,1)}, and test that f(x), f(y), f(z) are not all equal.
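This combined 3-query test is short enough to run. The sketch below simulates one trial; since the simulation is randomized, the comment only claims the easy direction: a dictator f(x) = x_i passes every trial, because BLR gives f(x) ⊕ f(y) ⊕ f(z) = x_i ⊕ y_i ⊕ (x_i ⊕ y_i) = 0, and the NAE distribution never makes all three of coordinate i equal.

```python
import random

def blr_nae_trial(f, m):
    """One trial of the combined BLR / NAE 3-query dictator test."""
    if random.random() < 0.5:
        # BLR linearity test: z = x XOR y, check f(x) ^ f(y) ^ f(z) == 0.
        x = [random.randint(0, 1) for _ in range(m)]
        y = [random.randint(0, 1) for _ in range(m)]
        z = [xi ^ yi for xi, yi in zip(x, y)]
        return f(x) ^ f(y) ^ f(z) == 0
    # NAE test: each coordinate triple uniform over {0,1}^3 minus {000, 111}.
    all_triples = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
    triples = [t for t in all_triples if t not in ((0, 0, 0), (1, 1, 1))]
    cols = [random.choice(triples) for _ in range(m)]
    x, y, z = ([col[j] for col in cols] for j in range(3))
    return not (f(x) == f(y) == f(z))

dictator = lambda x: x[2]
assert all(blr_nae_trial(dictator, m=5) for _ in range(2000))  # dictators always pass
```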

Hardness of approximation
PCP hardness-of-approximation rule of thumb: “To prove hardness of (c, s)-approximating Max-Φ, it suffices to give a (c, s)-Slightly-Dictator-Test where the test is from Φ.”

(c, s)-Slightly-Dictator-Tests
If f is a Dictator, it passes with probability ≥ c. If f has correlation < ε with every Dictator (and Dictator-negation), then f passes with probability < s + ε′, where ε′ → 0 as ε → 0. (“If f passes with high enough probability, it’s slightly Dictatorial.”) (For PCP purposes, you can sometimes even get away with “Very-Slightly-Dictator-Tests”…)

Talk outline
1. Constraint satisfaction problems and hardness of approximation
2. Dictator Tests & Slightly Dictator Tests
3. A new Slightly Dictator Test and hardness of approximation result for the Max-Cut-Gain problem

Max-Cut Slightly-Dictator-Tests
For Max-Cut, you need a 2-query Slightly-Dictator-Test where the tests are of the form “f(x) ≠ f(y)”. KKMO ’04 proposed the Noise Sensitivity test: pick x ∈ {0,1}^m uniformly, form y ∈ {0,1}^m by flipping each bit independently with probability ρ, and test f(x) ≠ f(y). Theorem (conjectured by KKMO, proved in MOO ’05): this is a (ρ, arccos(1 − 2ρ)/π)-Very-Slightly-Dictator-Test.
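The Noise Sensitivity test is two queries and one line of randomness. A simulation sketch: a dictator f(x) = x_i passes exactly when its one coordinate got flipped, i.e. with probability exactly ρ, so the Monte Carlo pass rate should concentrate near ρ.

```python
import random

def noise_sensitivity_trial(f, m, rho):
    """One trial of the KKMO Noise Sensitivity test: accept iff f(x) != f(y)."""
    x = [random.randint(0, 1) for _ in range(m)]
    y = [xi ^ (random.random() < rho) for xi in x]  # flip each bit w.p. rho
    return f(x) != f(y)

# A dictator passes iff its single coordinate was flipped, i.e. w.p. rho.
dictator = lambda x: x[0]
trials = 200_000
rate = sum(noise_sensitivity_trial(dictator, 10, 0.3) for _ in range(trials)) / trials
print(rate)  # Monte Carlo estimate, concentrated near rho = 0.3
```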

Corollaries
ρ = 1 − ε: gives (1 − ε, 1 − (2/π)√ε)-hardness* for Max-Cut.
ρ ≈ .845: gives (≈.845, ≈.74)-hardness* for Max-Cut (the .878 gap).
ρ = ½ + ε: gives (½ + ε, ½ + (2/π)ε)-hardness* for Max-Cut.
The first two are best possible, as Goemans and Williamson gave matching algorithms. The last doesn’t match the (½ + ε, ½ + Ω(ε/log(1/ε)))-approximation algorithm of Charikar and Wirth. Our goal: give matching hardness.

A new result
Subhash Khot and I improved the hardness result to match Charikar and Wirth, by analyzing a new Dictator Test: do the Noise Sensitivity test some fraction of the time with noise rate ρ_1, and some fraction of the time with ρ_2, balanced so that Dictators pass w.p. ½ + ε. This gives a (½ + ε, ½ + Ω(ε/log(1/ε)))-Slightly-Dictator-Test using “≠” tests. Bonuses: it’s a Slightly-Dictator-Test (not Very-Slightly-). Unlike MOO ’05, after doing the usual Fourier analysis stuff, the proof is about 10 lines rather than 10 pages.
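The balancing step above is one linear equation: a dictator passes a rate-ρ Noise Sensitivity test w.p. exactly ρ, so mixing the two rates with weight p must satisfy p·ρ_1 + (1 − p)·ρ_2 = ½ + ε. The particular rates are not given on this slide, so the values below are made up purely to illustrate the balancing:

```python
def mixing_weight(rho1, rho2, eps):
    """Fraction of the time to run the test at noise rate rho1 so that a
    Dictator passes the mixed test w.p. exactly 1/2 + eps, solving
    p*rho1 + (1 - p)*rho2 = 1/2 + eps for p."""
    return (0.5 + eps - rho2) / (rho1 - rho2)

# Hypothetical rates, just to show the balancing condition:
rho1, rho2, eps = 0.9, 0.4, 0.05
p = mixing_weight(rho1, rho2, eps)
print(p, p * rho1 + (1 - p) * rho2)  # second value comes out to 1/2 + eps
```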

Main technical analysis
First, rename bits to −1 and 1, rather than 0 and 1. Next, do the usual Fourier analysis stuff… Let f : {−1,1}^m → {−1,1} be any function, and say it has correlation c_i with the i-th Dictator function, i = 1…m. Let L : {−1,1}^m → R be the function L(x_1, …, x_m) = c_1·x_1 + c_2·x_2 + ⋯ + c_m·x_m. This is the linear polynomial over R that f “looks most like”.
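In the ±1 convention, the correlation with the i-th Dictator is c_i = E[f(x)·x_i], i.e. the degree-1 Fourier coefficients of f. These can be computed by brute-force enumeration; e.g. for Majority on 3 bits, each c_i comes out to ½:

```python
from itertools import product

def dictator_correlations(f, m):
    """c_i = E[f(x) * x_i] over uniform x in {-1,1}^m: the correlation of f
    with the i-th Dictator (equivalently, f's degree-1 Fourier coefficients)."""
    pts = list(product([-1, 1], repeat=m))
    return [sum(f(x) * x[i] for x in pts) / len(pts) for i in range(m)]

maj3 = lambda x: 1 if sum(x) > 0 else -1
print(dictator_correlations(maj3, 3))  # → [0.5, 0.5, 0.5]
```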

Main technical analysis
L(x_1, …, x_m) = c_1·x_1 + c_2·x_2 + ⋯ + c_m·x_m, σ² := Σ c_i². (σ² roughly measures how Dictatorial f is.) The probability that f : {−1,1}^m → {−1,1} passes the test is (essentially) determined by the distribution of L. [formula shown on slide]

Main technical analysis
L(x_1, …, x_m) = c_1·x_1 + c_2·x_2 + ⋯ + c_m·x_m, σ² := Σ c_i². Conclusion: if all the correlations c_i are small, the distribution of L looks like a Gaussian, with variance σ².

Gaussian facts
The probability that a Gaussian random variable with variance 1 goes above t is about exp(−t²/2). By scaling, the probability that a Gaussian with variance σ² goes above t is about exp(−t²/2σ²). So the probability that a Gaussian with variance σ² goes above 2 is about exp(−2/σ²). If σ² ≥ 10/ln(1/ε), we have Pr_x[L(x) > 2] ≥ ε^(1/5).
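The last step of the slide is pure arithmetic, easy to sanity-check numerically: plugging σ² = 10/ln(1/ε) into the tail bound exp(−2/σ²) gives exp(−(1/5)·ln(1/ε)) = ε^(1/5) exactly.

```python
import math

# Check the slide's arithmetic: with sigma^2 = 10 / ln(1/eps),
# the tail bound exp(-2 / sigma^2) works out to exactly eps^(1/5).
eps = 1e-3
sigma_sq = 10 / math.log(1 / eps)
tail = math.exp(-2 / sigma_sq)
print(tail, eps ** 0.2)  # the two values agree
```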

Main technical analysis
L(x_1, …, x_m) = c_1·x_1 + c_2·x_2 + ⋯ + c_m·x_m, σ² := Σ c_i². If all the correlations c_i are small, then: if σ² ≥ 10/ln(1/ε), we have Pr_x[L(x) > 2] ≥ ε^(1/5) ⇒ (½ + ε, ½ + Ω(ε/log(1/ε)))-Slightly-Dictator-Test.

Open problem
Suppose you want a 3-query (1, s)-(Very)-Slightly-Dictator-Test. Until recently, the best s was Håstad’s 3/4. Khot & Saket ’06 got s down to 20/27. Conjectured (by Zwick) best s: 5/8 (!). I’m pretty sure I know the test, but I can’t analyze it…

The End