
1 CS 4700: Foundations of Artificial Intelligence. Carla P. Gomes (gomes@cs.cornell.edu). Module: Satisfiability (Reading: R&N, Chapter 7)

2 Proof methods

Proof methods divide into (roughly) two kinds:

- Application of inference rules (previous module, intro to logic):
  - Legitimate (sound) generation of new sentences from old
  - Proof = a sequence of inference rule applications
  - Can use inference rules as operators in a standard search algorithm
- Model checking (current module):
  - Truth table enumeration (always exponential in n)
  - Improved backtracking, e.g., Davis-Putnam-Logemann-Loveland (DPLL), including some inference rules
  - Heuristic search in model space (sound but incomplete), e.g., min-conflicts-like hill-climbing algorithms

3 Satisfiability

4 Propositional Satisfiability problem

Satisfiability (SAT): Given a formula in propositional calculus, is there a model (i.e., a satisfying interpretation, an assignment to its variables) making it true? We consider clausal form (CNF), e.g.:

(a ∨ ¬b ∨ c) ∧ (b ∨ ¬c) ∧ (¬a ∨ c)

For n variables there are 2ⁿ possible assignments.

SAT is the prototypical hard combinatorial search and reasoning problem; it is NP-complete (Cook 1971). Surprising "power" of SAT for encoding computational problems.
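The definition above can be made concrete with a tiny brute-force checker that enumerates all 2ⁿ assignments. This is an illustrative sketch, not from the slides; it uses the DIMACS-style convention (introduced later in the slides) that a clause is a list of nonzero integers, where +v is variable v and -v its negation.

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Try all 2^n assignments; return a satisfying one, or None if unsatisfiable."""
    for bits in product([False, True], repeat=n_vars):
        # assignment[v] is the truth value of variable v (variables are 1-indexed)
        assignment = dict(enumerate(bits, start=1))
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment
    return None

# (a v ~b v c) ^ (b v ~c) ^ (~a v c), with a=1, b=2, c=3
clauses = [[1, -2, 3], [2, -3], [-1, 3]]
print(brute_force_sat(clauses, 3))
```

The exponential loop is exactly the "truth table enumeration" that DPLL and local search improve on.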

5 Satisfiability as an Encoding Language

6 Encoding Latin Square Problems in Propositional Logic

Variables: each variable represents a color assigned to a cell.

Clauses:
- Some color must be assigned to each cell: one clause of length n per cell (n² clauses).
- No color is repeated in the same row (row i, color k): sets of negative binary clauses, n(n-1)/2 per row and color, i.e., n(n-1)/2 · n · n in total.
- No color is repeated in the same column (column j, color k): likewise n(n-1)/2 · n · n negative binary clauses.
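The clause counts above can be checked by generating the encoding. The following sketch (illustrative, not from the slides; the variable-numbering scheme `var(i, j, k)` is my own choice) emits the 2D encoding described on this slide.

```python
def latin_square_cnf(n):
    """2D propositional encoding of an order-n Latin square.
    var(i, j, k) is true iff cell (i, j) gets color k (all 0-indexed)."""
    def var(i, j, k):
        # Map (cell, color) to a DIMACS-style index >= 1.
        return i * n * n + j * n + k + 1

    clauses = []
    # Some color must be assigned to each cell: n^2 clauses of length n.
    for i in range(n):
        for j in range(n):
            clauses.append([var(i, j, k) for k in range(n)])
    # No color repeated in a row: n(n-1)/2 negative binary clauses per row and color.
    for i in range(n):
        for k in range(n):
            for j1 in range(n):
                for j2 in range(j1 + 1, n):
                    clauses.append([-var(i, j1, k), -var(i, j2, k)])
    # No color repeated in a column: the same count again.
    for j in range(n):
        for k in range(n):
            for i1 in range(n):
                for i2 in range(i1 + 1, n):
                    clauses.append([-var(i1, j, k), -var(i2, j, k)])
    return clauses

print(len(latin_square_cnf(3)))  # 9 + 27 + 27 = 63 clauses for n = 3
```

Note this is the plain 2D encoding; the 3D/full encoding on the next slide adds further clause groups.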

7 3D Encoding or Full Encoding

This encoding is based on the cubic representation of the quasigroup: each line of the cube contains exactly one true variable.

Variables: same as the 2D encoding.

Clauses: same as the 2D encoding, plus:
- Each color must appear at least once in each row;
- Each color must appear at least once in each column;
- No two colors are assigned to the same cell.

8 DIMACS format

At the top of the file is a simple header: p cnf <number of variables> <number of clauses>.

Each variable is assigned an integer index, starting at 1 (0 is reserved to indicate the end of a clause). A positive integer represents a positive literal, whereas a negative integer represents a negative literal.

Example: the line "-1 7 0" encodes the clause (¬x1 ∨ x7).
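A minimal parser for this format can make the conventions concrete. This is an illustrative sketch (not part of the slides); it handles the header, comment lines starting with "c", and clauses terminated by 0, possibly spanning several lines.

```python
def parse_dimacs(text):
    """Parse a DIMACS CNF string into (num_vars, clauses)."""
    num_vars = 0
    clauses, current = [], []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('c'):
            continue                      # skip blanks and comments
        if line.startswith('p'):
            _, _fmt, nv, _nc = line.split()
            num_vars = int(nv)
            continue
        for tok in line.split():
            lit = int(tok)
            if lit == 0:                  # 0 terminates the current clause
                clauses.append(current)
                current = []
            else:
                current.append(lit)
    return num_vars, clauses

nv, cls = parse_dimacs("p cnf 7 2\n-1 7 0\n-1 6 0\n")
print(nv, cls)  # 7 [[-1, 7], [-1, 6]]
```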

9 Extended Latin Square 2x2

Variables (order 2, colors red/green):
1 - cell 11 is red    2 - cell 11 is green
3 - cell 12 is red    4 - cell 12 is green
5 - cell 21 is red    6 - cell 21 is green
7 - cell 22 is red    8 - cell 22 is green

p cnf 8 24
-1 -2 0  -3 -4 0  -5 -6 0  -7 -8 0    (a cell gets at most one color)
-1 -5 0  -2 -6 0  -3 -7 0  -4 -8 0    (no repetition of a color in a column)
-1 -3 0  -2 -4 0  -5 -7 0  -6 -8 0    (no repetition of a color in a row)
1 2 0  3 4 0  5 6 0  7 8 0            (a cell gets a color)
1 5 0  2 6 0  3 7 0  4 8 0            (a given color goes in each column)
1 3 0  2 4 0  5 7 0  6 8 0            (a given color goes in each row)
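With only 8 variables, the 2x2 instance can be checked exhaustively. This sketch (illustrative, not from the slides) enumerates all assignments and counts the models; the two Latin squares of order 2 (red/green and green/red diagonals) should be the only solutions.

```python
from itertools import product

# The 24 clauses of the 2x2 instance, grouped as on the slide.
CLAUSES = [
    [-1, -2], [-3, -4], [-5, -6], [-7, -8],   # a cell gets at most one color
    [-1, -5], [-2, -6], [-3, -7], [-4, -8],   # no repeated color in a column
    [-1, -3], [-2, -4], [-5, -7], [-6, -8],   # no repeated color in a row
    [1, 2], [3, 4], [5, 6], [7, 8],           # a cell gets a color
    [1, 5], [2, 6], [3, 7], [4, 8],           # each color appears in each column
    [1, 3], [2, 4], [5, 7], [6, 8],           # each color appears in each row
]

def models(clauses, n_vars):
    """Yield every satisfying assignment (brute force is fine for 8 variables)."""
    for bits in product([False, True], repeat=n_vars):
        a = dict(enumerate(bits, start=1))
        if all(any(a[abs(l)] == (l > 0) for l in c) for c in clauses):
            yield a

solutions = list(models(CLAUSES, 8))
print(len(solutions))  # 2
```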

10 Significant progress in Satisfiability Methods

Software and hardware verification: complete methods are critical, e.g., for verifying the correctness of chip designs using SAT encodings. Current methods can automatically verify the correctness of more than 1/7 of a Pentium IV. Instances have grown from 50 variables and 200 constraints to 1,000,000 variables and 5,000,000 constraints over the last 10 years.

Applications: hardware and software verification, planning, protocol design, etc.

11 Model Checking

12 Turing Award (Source: Slashdot)

13 A "real world" example

14 Bounded Model Checking instance:

i.e., ((not x1) or x7) and ((not x1) or x6) and … etc.

15 10 pages later, the clauses/constraints are getting more interesting:

(x177 or x169 or x161 or x153 … or x17 or x9 or x1 or (not x185))

16 4000 pages later: a CNF clause with 59 literals… !!!

17 Finally, 15,000 pages later: the MiniSAT solver solves this instance in less than one minute!

18 Effective propositional inference

19 Effective propositional inference

Two families of algorithms for propositional inference (checking satisfiability) based on model checking, both quite effective in practice:
- Complete backtracking search algorithms: the DPLL algorithm (Davis, Putnam, Logemann, Loveland)
- Incomplete local search algorithms: the WalkSAT algorithm

20 The DPLL algorithm

Determines whether an input propositional logic sentence (in CNF) is satisfiable. Improvements over truth table enumeration:

1. Early termination
   - A clause is true if any of its literals is true.
   - A sentence is false if any clause is false.
2. Pure symbol heuristic
   - A pure symbol always appears with the same "sign" in all clauses.
   - E.g., in the three clauses (A ∨ ¬B), (¬B ∨ ¬C), (C ∨ A): A and B are pure, C is impure.
   - Make a pure symbol's literal true.
3. Unit clause heuristic
   - A unit clause has only one literal.
   - The only literal in a unit clause must be true.

21 The DPLL algorithm: early termination, pure symbol detection, unit propagation, then branch (recursive call).
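The steps on these two slides can be sketched as a compact recursive solver. This is an illustrative implementation of the three heuristics (not the slides' pseudocode); clauses use the integer-literal convention from the DIMACS slide.

```python
def dpll(clauses, assignment=None):
    """DPLL: unit propagation, pure-literal elimination, then branch.
    Returns a (possibly partial) model dict, or None if unsatisfiable."""
    if assignment is None:
        assignment = {}
    clauses = [list(c) for c in clauses]

    def simplify(clauses, lit):
        """Set literal `lit` true: drop satisfied clauses, shrink the rest."""
        out = []
        for c in clauses:
            if lit in c:
                continue                      # clause satisfied: early termination
            reduced = [l for l in c if l != -lit]
            if not reduced:
                return None                   # empty clause: conflict
            out.append(reduced)
        return out

    while True:
        if not clauses:
            return assignment                 # every clause satisfied
        # Unit clause heuristic: a unit clause's literal must be true.
        unit = next((c[0] for c in clauses if len(c) == 1), None)
        if unit is not None:
            assignment[abs(unit)] = unit > 0
            clauses = simplify(clauses, unit)
            if clauses is None:
                return None
            continue
        # Pure symbol heuristic: a literal whose negation never appears.
        lits = {l for c in clauses for l in c}
        pure = next((l for l in lits if -l not in lits), None)
        if pure is not None:
            assignment[abs(pure)] = pure > 0
            clauses = simplify(clauses, pure)
            continue
        break
    # Branch: try both values of some remaining variable (recursive call).
    lit = clauses[0][0]
    for choice in (lit, -lit):
        reduced = simplify(clauses, choice)
        if reduced is not None:
            result = dpll(reduced, dict(assignment, **{abs(choice): choice > 0}))
            if result is not None:
                return result
    return None

print(dpll([[1, -2], [-2, -3], [3, 1]]) is not None)  # satisfiable
print(dpll([[1], [-1]]))                              # None: unsatisfiable
```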

22 DPLL is the basic algorithm underlying state-of-the-art SAT solvers. Several enhancements: data structures; clause learning; randomization and restarts. Check: http://www.satlive.org/

23 Learning in SAT

24 The WalkSAT algorithm

- Incomplete, local search algorithm.
- Evaluation function: the min-conflict heuristic of minimizing the number of unsatisfied clauses.
- Balances greediness and randomness.

25 The WalkSAT algorithm
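A sketch of WalkSAT along the lines of the previous slide (illustrative, not the slides' pseudocode; the noise parameter `p` and flip limit are assumed defaults): pick an unsatisfied clause, then flip either a random variable in it or the variable whose flip breaks the fewest clauses.

```python
import random

def walksat(clauses, n_vars, p=0.5, max_flips=10000, seed=0):
    """WalkSAT: random restarts/walk balanced against greedy min-conflicts moves.
    Returns a model dict, or None if no model is found (failure != unsatisfiable)."""
    rng = random.Random(seed)
    a = {v: rng.choice([True, False]) for v in range(1, n_vars + 1)}

    def sat(clause):
        return any(a[abs(l)] == (l > 0) for l in clause)

    def unsat_count_after_flip(v):
        a[v] = not a[v]
        n = sum(1 for c in clauses if not sat(c))   # min-conflict evaluation
        a[v] = not a[v]
        return n

    for _ in range(max_flips):
        unsat = [c for c in clauses if not sat(c)]
        if not unsat:
            return a                                # model found
        clause = rng.choice(unsat)                  # a randomly chosen false clause
        if rng.random() < p:                        # random walk move
            v = abs(rng.choice(clause))
        else:                                       # greedy move
            v = min((abs(l) for l in clause), key=unsat_count_after_flip)
        a[v] = not a[v]
    return None

model = walksat([[1, -2, 3], [2, -3], [-1, 3]], 3)
print(model is not None)
```

Because the algorithm is incomplete, returning None does not prove unsatisfiability; it only means no model was found within the flip budget.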

26 Lots of solvers and information about SAT, theory and practice: http://www.satlive.org/

27 Computational Complexity of SAT

How does an algorithm scale?
- Standard algorithmic approach (worst case): too pessimistic.
- Average case: the ideal approach, but over what distribution? There is a tension between analyzable and realistic distributions.
- Spectrum of hardness.

28 SAT Complexity

NP-complete worst-case complexity (2ⁿ possible assignments).

"Average" case complexity (I): Constant Probability Model (Goldberg 79; Goldberg et al. 82)
- N variables; L clauses; p = fixed probability that a variable appears in a clause (each literal negated with probability 0.5); i.e., the average clause length is pN.
- Eliminate empty and unit clauses.
- Empirically, on average, SAT can be easily solved: O(n²).
- Key problem: this is an easy distribution; random guesses find a solution in a constant number of tries (Franco 86; Franco and Ho 88).

29 Hard satisfiability problems

Consider random 3-CNF sentences, e.g.:

(¬D ∨ ¬B ∨ C) ∧ (B ∨ ¬A ∨ ¬C) ∧ (¬C ∨ ¬B ∨ E) ∧ (E ∨ ¬D ∨ B) ∧ (B ∨ E ∨ ¬C)

m = number of clauses; n = number of symbols.
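The fixed-clause-length model described on the next slide can be sketched directly: each clause picks k distinct variables and negates each with probability 1/2. This is an illustrative generator, not from the slides.

```python
import random

def random_ksat(n, m, k=3, seed=0):
    """Random k-SAT, fixed-clause-length model: m clauses over n variables,
    each clause on k distinct variables, each literal negated with prob. 1/2."""
    rng = random.Random(seed)
    clauses = []
    for _ in range(m):
        chosen = rng.sample(range(1, n + 1), k)
        clauses.append([v if rng.random() < 0.5 else -v for v in chosen])
    return clauses

# Near the hard region for 3-SAT, m/n ~ 4.3:
n = 20
cnf = random_ksat(n, int(4.3 * n))
print(len(cnf), all(len(c) == 3 for c in cnf))
```

Generating instances at varying m/n ratios is exactly how the hardness curves on the following slides are produced.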

30 SAT Complexity

"Average" case complexity (II): Fixed-Clause-Length Model, Random K-SAT (Franco 86)
- N variables; L clauses; K literals per clause.
- Randomly choose a set of K variables per clause (each literal negated with probability 0.5).
- Expected time: O(2ⁿ).

Can we provide a finer characterization beyond worst-case results? Typical case analysis.

31 Typical-Case Complexity

Typical-case complexity gives a more detailed picture:
- Characterization of the spectrum of hardness of instances as we vary certain interesting instance parameters, e.g., for SAT, the clause-to-variable ratio.
- Are some regimes easier than others?
- What about a majority of the instances?

32 Typical Case Analysis: 3-SAT (Selman et al. 92, 96). All clauses have 3 literals; the plot shows median runtime.

33 Hard problems seem to cluster near m/n = 4.3 (the critical point); the plot shows the median.

34 Intuition

At low ratios:
- few clauses (constraints)
- many satisfying assignments
- easily found

At high ratios:
- many clauses
- inconsistencies easily detected


