1 Vertex-Cover Inapproximability. Irit Dinur, NEC, Princeton. Based on joint work with Muli Safra.

2 Talk Outline
- Basic PCP
- Best inapproximability results: the BGS-Håstad paradigm of composing the Raz verifier with the Long-Code
- In this work: the same distinctive Outer/Inner composition structure; a new Outer PCP; Inner = the biased Long-Code
- I will describe the Long-Code, then talk about the properties needed from the outer verifier for a construction to work with the LC.

3 Vertex Cover
- Vertex Cover: given a graph G, find a smallest set of vertices that touches all edges.
- The complement of a (minimum) vertex cover is a (maximum) independent set.
- Best algorithm: approximates VC within 2-o(1) [BYE, MS, Hal].
- Best previous hardness: within 7/6 [Håstad].
- In this work, we show hardness of 1.36.
- How far can PCP techniques take us? (Do they always suffice for optimal hardness?)
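Not part of the slides: a minimal Python sketch, under assumptions of my own, of the two facts used here, namely that the complement of a vertex cover is an independent set and that a maximal matching yields the classic factor-2 approximation for Vertex Cover. The example graph and function names are made up for illustration.

```python
def is_vertex_cover(edges, cover):
    # Every edge must have at least one endpoint in the cover.
    return all(u in cover or v in cover for u, v in edges)

def is_independent_set(edges, ind):
    # No edge may have both endpoints inside the set.
    return all(not (u in ind and v in ind) for u, v in edges)

def two_approx_vertex_cover(edges):
    # Greedy maximal matching: take both endpoints of each uncovered edge.
    # The result is at most twice the size of a minimum vertex cover.
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

if __name__ == "__main__":
    vertices = {1, 2, 3, 4, 5}
    edges = [(1, 2), (1, 3), (2, 3), (3, 4), (4, 5)]
    cover = two_approx_vertex_cover(edges)
    assert is_vertex_cover(edges, cover)
    # The complement of any vertex cover is an independent set, and vice versa.
    assert is_independent_set(edges, vertices - cover)
    print("2-approx cover:", cover)
```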

4 Basic PCP (outer verifier)
- There are all kinds of PCP theorems, with various properties of the local tests, e.g. the range of the variables, the number of variables, the size of …, etc.
- Variables y1, y2, y3, y4, y5, y6, …, ym and a 3-CNF
  Φ = (y1 ∨ y13 ∨ y2) ∧ (y15 ∨ y19 ∨ y29) ∧ (y22 ∨ y13 ∨ y21) ∧ … ∧ (y4 ∨ y31 ∨ y24)
- Given Φ, it is NP-hard to distinguish between:
  ∃ a satisfying assignment;
  no assignment satisfies more than 99% of the constraints.
- More generally, a system of 2-variable tests: Φ = φ1(y1,y4), φ2(y1,y2), φ3(y2,y4), φ4(y3,y1), φ5(y1,y6), φ6(y6,y2)
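Again not from the slides: a hedged illustration of the constraint system pictured here, a 3-CNF over variables y1, …, ym where the quantity of interest is the fraction of clauses an assignment satisfies (the PCP theorem says distinguishing 100% from at most 99% is NP-hard). The particular clauses below are invented for the example.

```python
import random

# A clause is a list of (variable_index, is_positive) literals.
clauses = [
    [(1, True), (13, False), (2, True)],
    [(15, True), (19, True), (29, False)],
    [(22, False), (13, True), (21, True)],
    [(4, True), (31, False), (24, True)],
]

def satisfied_fraction(clauses, assignment):
    # Fraction of clauses with at least one true literal.
    sat = sum(
        any(assignment[var] == sign for var, sign in clause)
        for clause in clauses
    )
    return sat / len(clauses)

if __name__ == "__main__":
    random.seed(0)
    assignment = {v: random.choice([True, False])
                  for clause in clauses for v, _ in clause}
    print("fraction satisfied:", satisfied_fraction(clauses, assignment))
```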

5 Brief History
- [AS, ALMSS]: Basic PCP theorem. VertexCover, MaxCUT, etc. are NP-hard to approximate within some constant.
- […, BGLR, FK, BS]: better reductions with explicit constants.
- [BGS '95]: introduced the Long-Code. E.g. for VertexCover: 1.068; for Max-CUT: 1.014.
- [Håstad 96-97]: Clique is NP-hard to approximate within n^(1-ε); optimal gaps for 3-SAT and for linear equations. Using Fourier analysis of the "marvelous" Long-Code, and a stronger PCP [Raz '95]. Hardness factor for VertexCover: 1.166; for Max-CUT: 1.062.
- This work [DS 02]: hardness factor for VertexCover: 1.36.

6 This Work
1. The Biased Long-Code: a generalization of the Long-Code. New techniques for analysis of the Long-Code, relying on tools from analysis and combinatorics.
2. A new PCP constraint system.

7 Common Composition Structure
Starting point: the PCP theorem.
1. Enhance it.
2. Compose with the Long-Code.
The hardest part of these works is the interplay between these two parts. (Diagram: PCP → Enhanced PCP → Long-Code.)

8 Hardness for VC
We want to construct a graph G from Φ such that, by the PCP theorem, it is NP-hard to distinguish between the two cases, and hence to (1+c)-approximate Vertex Cover.
Given Φ, it is NP-hard to distinguish between:
∃ a satisfying assignment;
no assignment satisfies more than 99% of the constraints.
Correspondingly:
- Φ is satisfiable ⟹ VC(G) = k
- Φ is far from satisfiable ⟹ VC(G) > (1+c)·k

9 "Encoding / Decoding"
- Moreover, we encode a satisfying assignment for Φ into a small vertex cover for G,
- and decode every "almost" small vertex cover for G into an "almost" satisfying assignment for Φ.
- In standard coding theory, we encode n bits by m bits (m > n), and are able to recover "somewhat corrupt" codewords.
- In our setting, we can decode any "small enough" vertex cover in G into an assignment for Φ.

10 Construction: Loose Outline
- The H component is some "gadget", constructed via the Long-Code (the inner verifier).
- The "skeleton" of the graph is a special PCP system (the outer verifier).
Properties we want from this graph G:
- A satisfying assignment translates to a vertex cover of size k.
- A vertex cover for G of size < k(1+c) translates to an "almost" satisfying assignment.
(Diagram: variables y1, y2, y3, …, ym; Φ = φ1(y1,y4), φ2(y1,y2), φ3(y2,y4), φ4(y3,y1), φ5(y1,y6), φ6(y6,y2).)

11 Vertex Cover
- We want a (small) vertex cover in G to correspond to (1) an assignment that is (2) satisfying.
- A vertex cover for the graph is a vertex cover in each H.
- Decode each small vertex cover in H into a value for the underlying y variable.
- Combinatorial question: construct a graph H such that any small vertex cover for H roughly corresponds to a single value in the range {1, 2, .., R}.

12 Encoding a Value by Vertex Covers
Lemma: Given {1, .., R}, we can construct a graph H = H(R) such that:
1. (encoding) Each value in {1, 2, .., R} corresponds to a vertex cover for H, consisting of ~1/2 of the vertices.
2. (decoding) Every vertex cover for H of size < 1-ε still corresponds to some constant number of values in {1, 2, .., R}.
Technique:
- the biased Long-Code,
- analysis of the influence of variables on Boolean functions,
- Erdős-Ko-Rado theorems on intersecting families of subsets.
(Figure labels: 2/3, 8/9; one value in {1, .., R}.)

13 Long-Code of R
R elements can be most concisely encoded by log R bits. Seeking redundancy properties, we use many more bits in the encoding. The Long-Code is the most redundant way, using 2^R bits.

14 Long-Code of R, LC: [R] → {0,1}^(2^R)
One bit for every subset of [R].
(Figure: elements 1, 2, …, R; one bit per subset.)

15 Long-Code of R, LC: [R] → {0,1}^(2^R)
One bit for every subset of [R].
How do we encode the element i ∈ [R]? (What's the value of LC(i)?)
(Figure: a codeword's bits, e.g. 0, 0, 1, 1, …, 1, one per subset.)
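A small sketch of the encoding asked about on this slide, as I read it: LC(i) has one bit per subset F of [R], and the bit for F is 1 exactly when i ∈ F, so the codeword has 2^R bits. R is kept tiny so the whole table can be printed.

```python
from itertools import combinations

def subsets(R):
    """All subsets of {1, ..., R}, as frozensets (2**R of them)."""
    elems = range(1, R + 1)
    for k in range(R + 1):
        for combo in combinations(elems, k):
            yield frozenset(combo)

def long_code(i, R):
    """LC(i): one bit per subset F of [R]; the bit is 1 iff i is in F."""
    return {F: int(i in F) for F in subsets(R)}

if __name__ == "__main__":
    R = 3
    codeword = long_code(2, R)
    assert len(codeword) == 2 ** R          # 2^R bits in total
    for F, bit in sorted(codeword.items(), key=lambda x: (len(x[0]), sorted(x[0]))):
        print(sorted(F), bit)
```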

16 The p-Biased Long-Code
Endow the bits with the product distribution: for each subset F, μ_p(F) = p^|F| · (1-p)^|R\F|.
(If p = 0.5 this is the regular Long-Code; we take p < 0.5.)
Roughly: take only subsets whose size is pR.
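The distribution on this slide, written out numerically as a sanity check; everything below simply evaluates μ_p(F) = p^|F| (1-p)^|R\F| and confirms it sums to 1 with average subset size pR. Parameters are illustrative.

```python
from itertools import combinations

def subsets(R):
    elems = range(1, R + 1)
    for k in range(R + 1):
        for combo in combinations(elems, k):
            yield frozenset(combo)

def mu_p(F, R, p):
    # Product (biased) measure of a subset F of {1, ..., R}.
    return p ** len(F) * (1 - p) ** (R - len(F))

if __name__ == "__main__":
    R, p = 6, 1/3
    total = sum(mu_p(F, R, p) for F in subsets(R))
    expected_size = sum(len(F) * mu_p(F, R, p) for F in subsets(R))
    print("total mass  :", round(total, 6))          # should be 1.0
    print("average |F| :", expected_size, "= pR =", p * R)
```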

17 The Disjointness Graph of the Biased Long-Code
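My reading of the graph named in this slide title, based also on slides 21 and 23: one vertex per subset of [R] (weighted by μ_p), with an edge between two subsets exactly when they are disjoint, so that an independent set is an intersecting family. A brute-force sketch under that assumption; R must stay tiny:

```python
from itertools import combinations

def subsets(R):
    elems = range(1, R + 1)
    for k in range(R + 1):
        for combo in combinations(elems, k):
            yield frozenset(combo)

def mu_p(F, R, p):
    return p ** len(F) * (1 - p) ** (R - len(F))

def disjointness_graph(R):
    """Vertices: subsets of [R].  Edges: pairs of disjoint subsets."""
    verts = list(subsets(R))
    edges = [(F, G) for F, G in combinations(verts, 2) if not (F & G)]
    return verts, edges

if __name__ == "__main__":
    R, p = 4, 1/3
    verts, edges = disjointness_graph(R)
    print(len(verts), "vertices,", len(edges), "edges")
```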

18 What is a codeword? (Figure: elements 1, 2, …, R.)

19-20 (Figure-only slides.)

21 A codeword is a vertex cover
- The complement of a vertex cover is always an independent set.
- Minimum vertex cover ⟺ maximum independent set.
- Claim: a Long-Code codeword, i.e. {all subsets (not) containing i}, partitions H into a largest independent set and its complement, a smallest vertex cover.
- Maximal intersecting families of subsets [Erdős-Ko-Rado '61].
- Lemma: the μ_p size of an intersecting family is ≤ p. (Trivial for p = 0.5; otherwise proven using "shadows" [Kruskal '63, Katona '68].)
(Figure: i ∈ R, VC(H) = 1-p = 1/2; i1, .., ik ∈ R, VC(H) < 1-ε.)
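A brute-force check of the claim on this slide, under the same reading of H as above: {all subsets containing i} is an intersecting family, hence independent in the disjointness graph, and its complement {all subsets not containing i} is a vertex cover of μ_p-weight 1-p. The small parameters are for illustration only.

```python
from itertools import combinations

def subsets(R):
    elems = range(1, R + 1)
    for k in range(R + 1):
        for combo in combinations(elems, k):
            yield frozenset(combo)

def mu_p(F, R, p):
    return p ** len(F) * (1 - p) ** (R - len(F))

if __name__ == "__main__":
    R, p, i = 5, 1/3, 2
    verts = list(subsets(R))
    edges = [(F, G) for F, G in combinations(verts, 2) if not (F & G)]

    codeword_is = {F for F in verts if i in F}       # independent set
    codeword_vc = {F for F in verts if i not in F}   # its complement

    # An intersecting family is an independent set in the disjointness graph.
    assert all(not (F in codeword_is and G in codeword_is) for F, G in edges)
    # The complement covers every disjoint pair (at least one side misses i).
    assert all(F in codeword_vc or G in codeword_vc for F, G in edges)
    # Its mu_p-weight is 1 - p.
    weight = sum(mu_p(F, R, p) for F in codeword_vc)
    print("VC weight:", round(weight, 6), "= 1 - p =", 1 - p)
```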

22 Given some IS/VC partition…
- It can be viewed as a truth table of a Boolean function on R variables.
- We can prove, using Friedgut's theorem, that this function is roughly a Junta (it must have low average sensitivity for a perturbed p value, because it is monotone).
- This means that only a constant number of elements determine whether a subset is in the VC or not.
- Thus, whenever VC < 1-ε, it is really a function of only a few elements i1, .., ik ∈ [R], i.e. a combination (e.g. union) of their VC encodings.
- Moreover, if p < 1/3 and the VC has size < 1-p², the Junta has a special structure, highlighting one single value i ∈ [R].
- Using: the complete characterization of maximal intersecting families by Ahlswede and Khachatrian '97.
(Figure: i ∈ R, VC(H) = 1-p = 1/2; i1, .., iK ∈ R, VC(H) < 1-ε.)
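Not from the slides: a sketch of the quantity this argument relies on, the p-biased influence of a coordinate (average sensitivity is the sum of the influences). The example function is the IS/VC partition coming from a single Long-Code codeword, which is a 1-junta, so only one coordinate has nonzero influence. This is only meant to make the terms concrete, not to reproduce the Friedgut-based proof.

```python
from itertools import combinations

def subsets(R):
    elems = range(1, R + 1)
    for k in range(R + 1):
        for combo in combinations(elems, k):
            yield frozenset(combo)

def mu_p(F, R, p):
    return p ** len(F) * (1 - p) ** (R - len(F))

def influence(f, i, R, p):
    """p-biased influence of coordinate i: Pr_{F ~ mu_p}[f(F) != f(F xor {i})]."""
    return sum(mu_p(F, R, p)
               for F in subsets(R)
               if f(F) != f(F ^ frozenset([i])))

if __name__ == "__main__":
    R, p = 6, 1/3
    # A monotone 1-junta: the indicator of subsets containing element 2
    # (the IS/VC partition coming from a Long-Code codeword).
    f = lambda F: 2 in F
    per_coord = {i: influence(f, i, R, p) for i in range(1, R + 1)}
    print(per_coord)                          # only coordinate 2 is influential
    print("average sensitivity:", sum(per_coord.values()))
```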

23 We constructed the disjointness graph of the biased Long-Code, and "showed" that:
1. Each value in {1, 2, .., R} corresponds to a small vertex cover for H (i.e. of size k).
2. Every vertex cover for H, if smaller than (4/3)·k, roughly corresponds to a single value in {1, 2, .., R}.
Now we can plug it into the whole construction…
(Figure: i ∈ R, VC(H) = 1-p = 2/3; i ∈ R, VC(H) < 1-p² = 8/9.)

24 Vertex Cover
- We want a (small) vertex cover in G to correspond to (1) an assignment that is (2) satisfying.
- (1) is accomplished by a lemma like the above.
- Achieving (2) entails representing the PCP constraints by red graph edges (between H components).

25 Expressing a local constraint
- The lemma gives a correspondence between values in [R] and small vertex covers in H.
- Next step: consider two copies of H, representing two y variables with a constraint between them.

26 Two copies of H, representing variables y1 and y2
- Add red edges so that "consistent" pairs of VCs will always cover the red edges too. (Thus there is no freedom in choosing the red edges.)
- Limitation: the red edges can themselves be covered "cheaply"…
- We want to ensure that a semi-small VC corresponds to an assignment to y1, y2 that satisfies the constraint.
- For this, either the VCs are required to be rather large (no gap), or the constraint must have high "uniqueness".
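The slide does not spell out the red-edge rule, so the following is only one natural reading, not necessarily the exact Dinur-Safra construction: put a red edge between F1 (in the copy for y1) and F2 (in the copy for y2) whenever no pair a ∈ F1, b ∈ F2 satisfies the constraint φ. With this rule, the codeword covers of any consistent pair of values automatically cover all red edges, which is the property the slide asks for. The constraint φ below is a toy example.

```python
from itertools import combinations, product

def subsets(R):
    elems = range(1, R + 1)
    for k in range(R + 1):
        for combo in combinations(elems, k):
            yield frozenset(combo)

def red_edges(R, phi):
    """Red edge (F1, F2) iff no pair a in F1, b in F2 satisfies phi(a, b)."""
    verts = list(subsets(R))
    return [(F1, F2) for F1, F2 in product(verts, verts)
            if not any(phi(a, b) for a in F1 for b in F2)]

if __name__ == "__main__":
    R = 4
    phi = lambda a, b: (a + b) % 2 == 0      # toy constraint on (y1, y2)
    verts = list(subsets(R))
    reds = red_edges(R, phi)

    # Pick a consistent pair of values and take the two codeword covers.
    a, b = 1, 3
    assert phi(a, b)
    vc1 = {F for F in verts if a not in F}   # cover in the copy for y1
    vc2 = {F for F in verts if b not in F}   # cover in the copy for y2
    # Consistent codeword covers also cover every red edge.
    assert all(F1 in vc1 or F2 in vc2 for F1, F2 in reds)
    print(len(reds), "red edges; all covered by the consistent pair", (a, b))
```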

27-29 Uniqueness
(Figures: the values 1, 2, 3, …, R for y1 and for y2, with the constraint drawn as a relation between the two columns.)

30 "High Uniqueness" PCP
For every h > 1 and ε > 0, there is a system of tests Φ with "high uniqueness" such that it is NP-hard to distinguish between:
1. There is an assignment satisfying 1-ε of the constraints.
2. Every assignment to at least a certain fraction of the variables must contain h variables that are pairwise inconsistent.

31 Going Back to the Construction
- A satisfying assignment can be encoded into a small vertex cover.
- A semi-small vertex cover can be decoded into a satisfying assignment (an assignment to a fraction of the variables without an h-sized clique of inconsistency).
(Diagram: variables y1, y2, y3, …, ym; Φ = φ1(y1,y4), φ2(y1,y2), φ3(y2,y4), φ4(y3,y1), φ5(y1,y6), φ6(y6,y2).)

32 2½ Parts
1. Construct H.
2. Construct a PCP with high "uniqueness".
3. Combine the two.

33 How to get factor 2?
1. Get a better PCP system with higher uniqueness.
2. New ways of combining the Long-Code into the soundness proof.

