The satisfiability threshold and clusters of solutions in the 3-SAT problem
Elitza Maneva, IBM Almaden Research Center

3-SAT
Variables: x1, x2, …, xn take values in {TRUE, FALSE}.
Constraints: (x1 or x2 or not x3), (not x2 or x4 or not x6), …
In symbols: (x1 ∨ x2 ∨ ¬x3) ∧ (¬x2 ∨ x4 ∨ ¬x6) ∧ …
[Figure: the formula drawn as a graph on variables x1, …, x8.]

Random 3-SAT: n variables, m = αn clauses.
[Figure: the clause-density axis α, marked with the regimes where PLR, myopic algorithms, random walk, belief propagation, and survey propagation find solutions, followed by the satisfiable and not-satisfiable regions. Red = proved, green = unproved.]

Rigorous bounds for random 3-SAT
1999: [Friedgut] there is a sharp threshold of satisfiability αc(n).
2002: [Kaporis, Kirousis, Lalas] and [Hajiaghayi, Sorkin] prove lower bounds on the threshold of up to 3.52.

Rigorous bounds for random 3-SAT
Pure Literal Rule algorithm:
- If any variable appears only positively or only negatively, assign it 1 or 0 respectively.
- Simplify the formula by removing the satisfied clauses.
- Repeat.
[Worked example: a four-clause formula over x1, …, x5 in which the pure literals are assigned one by one until every clause is satisfied.]
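The rule is easy to make concrete. Below is a minimal Python sketch (an illustration, not code from the talk); the encoding of clauses as tuples of signed integers and the sample formula f are assumptions of mine.

```python
# Pure Literal Rule sketch. A clause is a tuple of nonzero ints:
# literal v means x_v = TRUE, literal -v means x_v = FALSE.
def pure_literal_rule(clauses):
    assignment = {}
    while True:
        literals = {lit for clause in clauses for lit in clause}
        pure = [lit for lit in literals if -lit not in literals]
        if not pure:
            break
        for lit in pure:
            assignment[abs(lit)] = lit > 0          # satisfy the pure literal
        # drop every clause now satisfied by a pure literal
        clauses = [c for c in clauses if not any(l in c for l in pure)]
    return assignment, clauses                      # leftover clauses: PLR got stuck

# Hypothetical example: x1, x4 and negated x3 are pure, so one round of
# the rule already satisfies all four clauses:
f = [(1, 2, -3), (-2, 4, -5), (1, -2, 4), (-3, 4, 5)]
print(pure_literal_rule(f))    # x1=True, x3=False, x4=True; no clauses left
```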

Rigorous bounds for random 3-SAT
Myopic algorithms:
- Choose a variable according to its numbers of positive and negative occurrences.
- Assign the variable the more popular value.
- Simplify the formula by: 1. removing the satisfied clauses; 2. removing the FALSE literals; 3. assigning variables in unit clauses; 4. assigning pure variables.
- Repeat.
Best rule: maximize |# positive occurrences − # negative occurrences| (a sketch of this selection step follows).
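For comparison, here is a sketch of just the selection-and-simplification step of the myopic rule, in the same clause encoding as above (unit propagation and the pure-literal cleanup are left out for brevity):

```python
from collections import Counter

# One myopic decimation step: pick the variable with the largest
# |#positive - #negative| imbalance, assign it the more popular value,
# and simplify the formula accordingly.
def myopic_step(clauses):
    pos, neg = Counter(), Counter()
    for clause in clauses:
        for lit in clause:
            (pos if lit > 0 else neg)[abs(lit)] += 1
    v = max(pos.keys() | neg.keys(), key=lambda u: abs(pos[u] - neg[u]))
    value = pos[v] >= neg[v]                        # the more popular value
    sat_lit = v if value else -v
    simplified = []
    for c in clauses:
        if sat_lit in c:
            continue                                # satisfied clause: remove it
        simplified.append(tuple(l for l in c if l != -sat_lit))  # drop FALSE literal
    return v, value, simplified
```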

Rigorous bounds for random 3-SAT
E[# solutions] = 2^n · Pr[00…0 is a solution] = 2^n · (1 − 1/8)^m = (2 · (7/8)^α)^n
For α > 5.191, E[# solutions] → 0, so Pr[satisfiable] → 0.
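The constant 5.191 is just the root of 2 · (7/8)^α = 1, which is quick to check numerically:

```python
import math

# E[#solutions] = (2 * (7/8)**alpha)**n vanishes as n grows exactly when
# 2 * (7/8)**alpha < 1, i.e. when alpha > ln 2 / ln(8/7).
alpha_star = math.log(2) / math.log(8 / 7)
print(round(alpha_star, 3))                         # 5.191
```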

Rigorous bounds for random 3-SAT
A sharper first-moment bound: E[# positively prime solutions] → 0 already at a smaller clause density.
Positively prime solution: a solution in which no variable assigned 1 can be converted to 0 while remaining a solution.
Fact: If there exists a solution, there exists a positively prime solution.

Rigorous bounds for random 3-SAT
Sharper still: E[# symmetrically prime solutions] → 0 at an even smaller clause density.


Random Walk Algorithms
[Alekhnovich, Ben-Sasson '03] Simple Random Walk: pick an unsatisfied clause; pick a variable in the clause; flip the variable. Theorem: finds a solution in O(n) steps for α below a small constant.
[Seitz, Alava, Orponen '05] [Ardelius, Aurell '06] ASAT: pick an unsatisfied clause; pick a variable in the clause; flip it unconditionally if the number of unsatisfied clauses does not increase, and only with probability p otherwise. Experiment: takes O(n) steps for α < 4.21.
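A compact sketch of both walks (my own simplified rendering, not the authors' implementations). With p = 1 every flip is kept and the code reduces to the simple random walk; for ASAT, p is a noise parameter that the authors tune empirically, so the default here is only a placeholder.

```python
import random

def unsat_clauses(clauses, a):
    # clauses violated by assignment a (a maps variable -> bool)
    return [c for c in clauses if not any((l > 0) == a[abs(l)] for l in c)]

def asat(clauses, n, p=0.2, max_steps=10**6):
    a = {v: random.random() < 0.5 for v in range(1, n + 1)}
    for _ in range(max_steps):
        unsat = unsat_clauses(clauses, a)
        if not unsat:
            return a                                # satisfying assignment found
        v = abs(random.choice(random.choice(unsat)))
        a[v] = not a[v]                             # tentative flip
        # keep a non-worsening flip always; keep a worsening one with prob. p
        if len(unsat_clauses(clauses, a)) > len(unsat) and random.random() > p:
            a[v] = not a[v]                         # undo the worsening flip
    return None                                     # gave up
```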


We can find solutions via inference
Suppose the formula φ is satisfiable. Consider the uniform distribution over satisfying assignments: Pr[x1, x2, …, xn] ∝ φ(x1, x2, …, xn).
Simple Claim: If we can compute Pr[xi = 1], then we can find a solution fast.
Decimation: Assign the variables one by one, each to a value of highest probability.
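The decimation loop in the claim is mechanical once a marginal estimator exists. In this sketch, estimate_marginal is a hypothetical oracle for Pr[x_v = 1] over the current simplified formula (belief propagation, below, is one concrete choice), and fixing the most biased variable first is a common heuristic rather than part of the claim.

```python
# Decimation: repeatedly fix one variable to its more likely value and
# simplify the formula (clauses are tuples of signed ints, as above).
def decimate(clauses, variables, estimate_marginal):
    assignment = {}
    while len(assignment) < len(variables):
        free = [v for v in variables if v not in assignment]
        v = max(free, key=lambda u: abs(estimate_marginal(clauses, u) - 0.5))
        value = estimate_marginal(clauses, v) >= 0.5
        assignment[v] = value
        sat = v if value else -v
        clauses = [tuple(l for l in c if l != -sat)
                   for c in clauses if sat not in c]
    return assignment
```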

Fact: We cannot hope to compute Pr[xi = 1] exactly.
Heuristics for guessing the best variable to assign:
1. Pure Literal Rule (PLR): choose a variable that appears only positively / only negatively.
2. Myopic Rule: choose a variable based on its numbers of positive and negative occurrences.
3. Belief Propagation: estimate Pr[xi = 1] by belief propagation and choose the variable with the largest estimated bias.

Computing Pr[x1 = 0] on a tree formula
[Figure: a formula whose graph is a tree rooted at x1; each subtree passes up the pair (# solutions with its variable 0, # solutions with it 1), and the pairs are combined at the root.]

Vectors can be normalized
[Figure: the same tree, with each count pair rescaled to a probability pair.]

… and thought of as messages
[Figure: the same tree, with the normalized pairs passed as messages toward x1.]

What if the graph is not a tree? Belief propagation

[Figure: a factor graph on x1, …, x11, with one factor ψ(x1, x2, x3) highlighted.]
Pr[x1, …, xn] ∝ Πa ψa(x_N(a)), i.e. a Markov Random Field (MRF).

Belief Propagation [Pearl '88]
Given: Pr[x1 … x7] ∝ ψa(x1, x3) · ψb(x1, x2) · ψc(x1, x4) · …
Goal: Compute Pr[x1] (i.e. the marginal).
Message passing rules:
M_{i→c}(xi) = Π_{b ∈ N(i)\c} M_{b→i}(xi)
M_{c→i}(xi) = Σ_{xj : j ∈ N(c)\i} ψc(x_N(c)) · Π_{j ∈ N(c)\i} M_{j→c}(xj)
Estimated marginals: μi(xi) ∝ Π_{c ∈ N(i)} M_{c→i}(xi)
Belief propagation is a dynamic programming algorithm. It is exact only when the recurrence relation holds, i.e.: 1. if the graph is a tree; 2. if the graph behaves like a tree (only large cycles).
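The update rules translate directly into code. Below is a deliberately naive Python sketch for CNF factor graphs (for intuition only, not the talk's code): messages are pairs indexed by x ∈ {0, 1}, ψc is 1 iff the clause is satisfied, and it assumes no variable repeats inside a clause.

```python
from itertools import product

def bp_marginals(clauses, iters=50):
    edges = [(ci, abs(l)) for ci, c in enumerate(clauses) for l in c]
    m_vc = {e: (0.5, 0.5) for e in edges}           # variable -> clause
    m_cv = {e: (1.0, 1.0) for e in edges}           # clause -> variable
    for _ in range(iters):
        for ci, c in enumerate(clauses):            # clause -> variable update
            for l in c:
                i, others = abs(l), [x for x in c if abs(x) != abs(l)]
                msg = [0.0, 0.0]
                for xi in (0, 1):
                    for xs in product((0, 1), repeat=len(others)):
                        sat = ((xi == 1) == (l > 0)) or any(
                            (x == 1) == (o > 0) for x, o in zip(xs, others))
                        if sat:                     # psi_c = 1 iff satisfied
                            w = 1.0
                            for x, o in zip(xs, others):
                                w *= m_vc[(ci, abs(o))][x]
                            msg[xi] += w
                m_cv[(ci, i)] = tuple(msg)
        for ci, i in edges:                         # variable -> clause update
            prod = [1.0, 1.0]
            for cj, j in edges:
                if j == i and cj != ci:
                    prod = [prod[x] * m_cv[(cj, j)][x] for x in (0, 1)]
            s = prod[0] + prod[1]
            m_vc[(ci, i)] = (prod[0] / s, prod[1] / s)
    marginals = {}
    for i in {v for _, v in edges}:                 # combine incoming messages
        b = [1.0, 1.0]
        for cj, j in edges:
            if j == i:
                b = [b[x] * m_cv[(cj, j)][x] for x in (0, 1)]
        marginals[i] = b[1] / (b[0] + b[1])         # estimated Pr[x_i = 1]
    return marginals

# On a tree formula BP is exact: (x1 or x2) and (not x2 or x3) has four
# solutions, and the estimates converge to the true marginals.
print(bp_marginals([(1, 2), (-2, 3)]))   # approx {1: 0.75, 2: 0.5, 3: 0.75}
```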

Applications of belief propagation
- Statistical learning theory
- Vision
- Error-correcting codes (Turbo, LDPC, LT)
- Lossy data compression
- Computational biology
- Sensor networks

Limitations of BP
[Figure: the phase-diagram slide again, highlighting where belief propagation stops working.]

Reason for failure of Belief Propagation
Messages from different neighbors are assumed to be almost independent, i.e. there are no long-range correlations.
[Figure: the clause-density axis split into a region with no long-range correlations, where the PLR, myopic, random-walk and BP algorithms operate, and a region where long-range correlations exist.]

Reason for failure of Belief Propagation
Messages from different neighbors are assumed to be almost independent, i.e. there are no long-range correlations.
Fix: 1-step Replica Symmetry Breaking Ansatz
- The distribution can be decomposed into "phases".
- There are no long-range correlations within a phase.
- Each phase consists of similar assignments: "clusters".
- Messages become distributions of distributions.
- An approximation yields 3-dimensional messages: Survey Propagation [Mezard, Parisi, Zecchina '02].
- Survey propagation finds a phase; then WalkSAT is used to find a solution within the phase.

Reason for failure of Belief Propagation
Fix: 1-step Replica Symmetry Breaking Ansatz. The distribution decomposes into phases:
Pr[x1, x2, …, xn] = Σ_φ p_φ · Pr_φ[x1, x2, …, xn]

[Figure: within each phase, a set of fixed variables takes the same value in all of the phase's assignments.]

Space of solutions
[Figure: the satisfying assignments in {0,1}^n, grouped into phases; each phase is summarized by a string over {0,1,★}, such as 01★1★0★★ or ★10★11★★.]

Survey propagation

Survey propagation message update rules:
M_{c→i} = Π_{j ∈ N(c)\i} M^u_{j→c} / (M^u_{j→c} + M^s_{j→c} + M^★_{j→c})
M^u_{i→c} = (1 − Π_{b ∈ N^u_c(i)} (1 − M_{b→i})) · Π_{b ∈ N^s_c(i)} (1 − M_{b→i})
M^s_{i→c} = (1 − Π_{b ∈ N^s_c(i)} (1 − M_{b→i})) · Π_{b ∈ N^u_c(i)} (1 − M_{b→i})
M^★_{i→c} = Π_{b ∈ N(i)\c} (1 − M_{b→i})
Here N^s_c(i) and N^u_c(i) are the clauses other than c in which xi appears with the same and with the opposite sign as in c, respectively.
[Figure: a factor graph with speech bubbles. Clause to variable: "You have to satisfy me with prob. 60%." Variable to clause: "I'm 0 with prob. 10%, 1 with prob. 70%, whichever (i.e. ★) 20%."]
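Below is a hedged sketch of one synchronous sweep of these updates, reconstructed to follow the standard survey propagation literature (the slide's own subscript placement was lost in extraction, so which product carries the complement is my reading). Here eta[(a, i)] is the survey from clause a to variable i; initialize the surveys randomly in (0, 1) and iterate the sweep to a fixed point.

```python
# One survey propagation sweep over a CNF (clauses are tuples of signed ints).
def sp_sweep(clauses, eta):
    occ = {}                                        # variable -> [(clause, sign)]
    for a, c in enumerate(clauses):
        for l in c:
            occ.setdefault(abs(l), []).append((a, l > 0))
    new = {}
    for a, c in enumerate(clauses):
        for l in c:
            i, prod = abs(l), 1.0
            for l2 in c:
                if abs(l2) == abs(l):
                    continue
                j, sign_in_a = abs(l2), l2 > 0
                ps = pu = 1.0                       # products of (1 - eta)
                for b, sign_in_b in occ[j]:
                    if b == a:
                        continue
                    if sign_in_b == sign_in_a:
                        ps *= 1.0 - eta[(b, j)]     # same sign as in clause a
                    else:
                        pu *= 1.0 - eta[(b, j)]     # opposite sign
                m_u = (1.0 - pu) * ps               # j forced to violate a
                m_s = (1.0 - ps) * pu               # j forced to satisfy a
                m_star = ps * pu                    # j unconstrained (star)
                prod *= m_u / (m_u + m_s + m_star + 1e-30)  # guard against 0/0
            new[(a, i)] = prod
    return new
```

If every survey converges to (near) zero, the replica-symmetric (plain BP) description suffices; otherwise the most polarized variable is fixed and the formula simplified before the next round, with WalkSAT finishing inside the phase, as described on the slide above.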

Combinatorial interpretation
Can survey propagation be thought of as inference over cluster assignments? Not precisely, but close.
We define a related concept: core / cover assignments.
Assignments in the same cluster share the same core; however, different clusters may have the same core.

Finding the core of a solution

[Figure: a satisfying assignment with its unconstrained variables highlighted.]

[Figure: the peeling proceeds step by step, replacing unconstrained variables by ★ one at a time: ★100 → ★1★0 → ★★★0 → ★★★★.]
Such a fully constrained partial assignment is called a cover.
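The peeling just illustrated is short in code. This sketch (mine, with the same signed-integer clause encoding as the earlier sketches) peels one unconstrained variable at a time; a variable is constrained when it is the unique satisfying literal of some clause.

```python
STAR = "*"

# Peel a satisfying assignment down to its core: keep replacing
# unconstrained variables by STAR until every assigned variable is
# the unique satisfier of some clause.
def core(clauses, assignment):
    a = dict(assignment)                            # variable -> True/False/STAR
    while True:
        constrained = set()
        for c in clauses:
            sat = [l for l in c
                   if a[abs(l)] != STAR and (l > 0) == a[abs(l)]]
            if len(sat) == 1:                       # unique satisfier
                constrained.add(abs(sat[0]))
        peelable = [v for v, val in a.items()
                    if val != STAR and v not in constrained]
        if not peelable:
            return a                                # fixed point: the core
        a[peelable[0]] = STAR                       # peel one variable, recheck
```

The slide's sequence ★100 → ★1★0 → ★★★0 → ★★★★ is one run of this loop; the convex-geometry slide further below is what guarantees that the endpoint does not depend on the peeling order.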

Extending the space of assignments
[Figure: the partial assignments in {0,1,★}^n arranged above the full assignments in {0,1}^n by number of stars; above each cluster of solutions sits its core.]

Survey propagation is a belief propagation algorithm
Theorem [Maneva, Mossel, Wainwright '05] [Braunstein, Zecchina '05]: Survey propagation is equivalent to belief propagation on the uniform distribution over cover assignments.
But we still need to look at all partial assignments.

Peeling experiment for 3-SAT, n = 10^5
[Figure: the outcome of the peeling procedure on solutions of random formulas as the clause density α varies.]

Clusters and partial assignments
[Figure: as before, the lattice of partial assignments in {0,1,★}^n above the clusters of solutions in {0,1}^n, ordered by the number of stars.]

Definition of the new distribution
1. Include all partial assignments without contradictions or implications.
2. Weight of a partial assignment σ: Pr[σ] ∝ ω^{n★(σ)} · (1 − ω)^{no(σ)}, where n★(σ) is the number of ★'s in σ and no(σ) is its number of unconstrained variables.
3. This yields a family of belief propagation algorithms, interpolating from vanilla BP (ω = 0) to SP (ω = 1).
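Point 2 is concrete enough to compute. A small sketch under the same conventions as the peeling code (the validity check of point 1 is omitted, and "unconstrained" reuses the unique-satisfier notion from there):

```python
STAR = "*"

# Weight of a partial assignment sigma (variable -> True/False/STAR):
# Pr[sigma] is proportional to omega**n_star * (1 - omega)**n_o.
def weight(clauses, sigma, omega):
    constrained = set()
    for c in clauses:
        sat = [l for l in c
               if sigma[abs(l)] != STAR and (l > 0) == sigma[abs(l)]]
        if len(sat) == 1:                           # unique satisfier
            constrained.add(abs(sat[0]))
    n_star = sum(1 for val in sigma.values() if val == STAR)
    n_o = sum(1 for v, val in sigma.items()
              if val != STAR and v not in constrained)
    return omega ** n_star * (1 - omega) ** n_o
```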

[Figure: the lattice of partial assignments again, now weighted by Pr[σ] ∝ ω^{n★(σ)}(1 − ω)^{no(σ)}; ω = 0 concentrates the weight on full assignments (vanilla BP) and ω = 1 on cores (SP).]
This is the correct picture for 9-SAT and above. [Achlioptas, Ricci-Tersenghi '06]

Clustering for k-SAT: what is known?
- 2-SAT: a single cluster.
- 3-SAT to 7-SAT: not known.
- 8-SAT and above: an exponential number of clusters (by the second moment method). [Mezard, Mora, Zecchina '05] [Achlioptas, Ricci-Tersenghi '06]
- 9-SAT and above: clusters have non-trivial cores (by the differential equations method). [Achlioptas, Ricci-Tersenghi '06]

 1111    11  11111111 1  11111111 1111  1   1  01     0 Convex geometry / Antimatroid Total weight is 1 for every 

Rigorous bounds for random 3-SAT
E[total weight of partial assignments] → 0 (at ω = 0.8).
Fact: If there exists a solution, the total weight of partial assignments is at least 1.

Rigorous bounds for random 3-SAT
Theorem [Maneva, Sinclair]: For α above a certain constant, one of the following holds:
1. with high probability there are no satisfying assignments;
2. the core of every satisfying assignment is (★, ★, …, ★).


Challenges
- Improve the bounds on the threshold.
- Prove that the algorithms work with high probability.
- Find an algorithm for certifying that a formula with αn clauses, for large α, has no solution.

Thank you