Artificial Intelligence

Part I: Depth-first search for SAT
Part II: The Davis-Putnam algorithm
Part III: Heuristics for SAT search
Search: the story so far
- Example search problems: SAT, TSP, games
- Search states, search trees
- Don't store whole search trees, just the frontier
- Depth-first, breadth-first, iterative deepening
- Best-first search
- Heuristics for the Eights Puzzle
- A*, branch & bound
Example Search Problem: SAT
- We need to define problems and solutions
- Propositional satisfiability (SAT)
  - really a logical problem; I'll present it first as a letters game
- A problem is a list of words
  - containing upper- and lower-case letters (order unimportant)
  - e.g. ABC, ABc, AbC, Abc, aBC, abC, abc
- A solution is a choice of upper or lower case for each letter
  - one choice per letter
  - each word must contain at least one of our choices
  - e.g. AbC is the unique solution to the problem above
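The letters game can be checked mechanically. The sketch below (function and variable names are my own, not from the lecture) tests a case choice against a problem, and brute-forces all eight choices to confirm that AbC is the unique solution:

```python
from itertools import product

def satisfies(problem, choice):
    """choice maps each letter to True (upper case) or False (lower case).
    A solution must give every word at least one matching letter."""
    return all(any(choice[lit.upper()] == lit.isupper() for lit in word)
               for word in problem)

problem = ["ABC", "ABc", "AbC", "Abc", "aBC", "abC", "abc"]

# Brute force over all 2^3 case choices: only A=upper, b=lower, C=upper works.
solutions = [dict(zip("ABC", vals))
             for vals in product([True, False], repeat=3)
             if satisfies(problem, dict(zip("ABC", vals)))]
```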
Example Search Problem: SAT
- We need to define problems and solutions
- Propositional satisfiability (SAT)
  - now presented as a logical problem
- A problem is a list of clauses
  - each clause contains literals
    - each literal is a positive or negative variable
    - e.g. +A, -B, +C, ...
- A solution is a choice of true or false for each variable
  - one choice per variable
  - each clause must contain at least one literal matching our choices
  - i.e. +A matches A = true, -A matches A = false
It's the same thing
- variable = letter
- literal = upper- or lower-case letter
- positive = true = upper case
- negative = false = lower case
- clause = word
- problem = problem
- I reserve the right to use either or both versions confusingly
Depth First Search for SAT
- What heuristics should we use?
- We need two kinds:
  - variable ordering
    - e.g. set A before B
  - value ordering
    - e.g. set true before false
- In the Eights Puzzle we only need value ordering
  - variable ordering is irrelevant
- In SAT, variable ordering is vital
  - value ordering is less important
Unit Propagation
- One heuristic in SAT is vital to success
- When we have a unit clause ...
  - e.g. +A
  - we must set A = true
  - if we set A = false the clause is unsatisfied, and so is the whole problem
- A unit clause might be in the original problem
- or might contain only one unset variable after simplification
  - e.g. clauses (aBC), (abc)
    - set A = upper case, B = lower case
    - what unit clause remains?
Unit Propagation
- e.g. clauses (aBC), (abc)
  - set A = upper case, B = lower case
  - what unit clause remains?
- A = upper case gives (BC), (bc)
- B = lower case satisfies (bc)
  - and reduces (BC) to (C)
- The unit clause is (C)
- We should set C = upper case
  - irrespective of other clauses in the problem
- Setting one unit clause can create a new one ...
  - leading to a cascade/chain reaction called unit propagation
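The cascade above can be traced mechanically. This sketch uses my own representation (a clause is a list of letters, upper case = positive literal) and my own helper names; it simplifies the slide's two clauses under A = upper, B = lower, then propagates the resulting unit clause:

```python
def simplify(clauses, var, value):
    """Assign var := value: drop satisfied clauses, delete falsified literals."""
    out = []
    for clause in clauses:
        if any(lit.upper() == var and lit.isupper() == value for lit in clause):
            continue                          # clause satisfied: drop it
        out.append([lit for lit in clause if lit.upper() != var])
    return out

def unit_propagate(clauses):
    """Repeatedly assign the variable of any unit clause; return forced choices."""
    forced = {}
    while True:
        if any(not c for c in clauses):
            return clauses, forced            # empty clause: contradiction, stop
        unit = next((c for c in clauses if len(c) == 1), None)
        if unit is None:
            return clauses, forced            # no unit clause left
        lit = unit[0]
        forced[lit.upper()] = lit.isupper()
        clauses = simplify(clauses, lit.upper(), lit.isupper())

# Slide example: (aBC), (abc) with A = upper case, B = lower case.
clauses = simplify(simplify([['a', 'B', 'C'], ['a', 'b', 'c']], 'A', True),
                   'B', False)                # leaves just the unit clause (C)
remaining, forced = unit_propagate(clauses)
```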
Depth First + Unit Propagation
- Unit propagation is vital in SAT
- Whenever there is a not-yet-satisfied unit clause,
  - set the corresponding variable to true if the literal is positive,
  - false if the literal is negative
- Use this to override all other heuristics
- Later in the lecture we will think about other heuristics to use as well
- Next we will look at another algorithm
Davis-Putnam
- The best complete algorithm for SAT is Davis-Putnam
  - first work by Davis and Putnam, 1960
  - current version by Davis, Logemann and Loveland, 1962
  - variously called DP/DLL/DPLL or just Davis-Putnam
  - I will present a slight variant omitting the "pure literal" rule
- A recursive algorithm
- Two stopping cases:
  - an empty set of clauses is trivially satisfiable
  - an empty clause is trivially unsatisfiable
    - there is no way to satisfy the clause
Algorithm DPLL(clauses)
1. If clauses is the empty clause set, succeed.
2. If clauses contains an empty clause, fail.
3. If clauses contains a unit clause (literal),
   return the result of DPLL(clauses[literal])
   - clauses[literal] means clauses simplified with the value of literal
4. Else heuristically choose a variable u and heuristically choose a value v:
   4a. If DPLL(clauses[u := v]) succeeds, succeed.
   4b. Else return the result of DPLL(clauses[u := not v])
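Steps 1-4 translate almost directly into Python. This is a sketch under my own representation (clause = list of letters, upper case = positive literal; all names are mine); `simplify(clauses, var, value)` plays the role of clauses[literal]:

```python
def simplify(clauses, var, value):
    """clauses[var := value]: drop satisfied clauses, delete falsified literals."""
    return [[lit for lit in c if lit.upper() != var]
            for c in clauses
            if not any(lit.upper() == var and lit.isupper() == value for lit in c)]

def dpll(clauses, assign):
    if not clauses:                           # 1. empty clause set: succeed
        return assign
    if any(not c for c in clauses):           # 2. empty clause: fail
        return None
    unit = next((c for c in clauses if len(c) == 1), None)
    if unit:                                  # 3. unit clause forces its variable
        var, val = unit[0].upper(), unit[0].isupper()
        return dpll(simplify(clauses, var, val), {**assign, var: val})
    var = clauses[0][0].upper()               # 4. heuristic choice: first occurrence
    val = clauses[0][0].isupper()
    return (dpll(simplify(clauses, var, val), {**assign, var: val})            # 4a
            or dpll(simplify(clauses, var, not val), {**assign, var: not val}))  # 4b

# The letters-game problem from earlier, with unique solution AbC.
problem = [list(w) for w in ["ABC", "ABc", "AbC", "Abc", "aBC", "abC", "abc"]]
model = dpll(problem, {})
```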
DPLL success
- At about 40 years old, DPLL is still the most successful complete algorithm for SAT
- Intensive research on variants of DPLL in the 90s
  - mostly very close to the 1962 version
- Implementations can be very efficient
- Most work is on finding good heuristics
  - good heuristics should find a solution quickly
    - or work out quickly that there is no solution
It's the same thing (again)
- DPLL is just depth-first search + unit propagation
- We've now got three presentations of the same thing:
  - search trees
  - an algorithm based on lists
  - DPLL
- This shows the general importance of depth-first search
Heuristics for DPLL
- We need variable ordering heuristics
  - they can easily make the difference between success and failure
- There is a tradeoff between simplicity and effectiveness
- Three very simple variable ordering heuristics:
  - lexicographic: choose A before B before C before ...
  - random: choose a random variable
  - first occurrence: choose the first variable in the first clause
- Pros: all very easy to implement
- Cons: ineffective except on very small or easy problems
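Each of the three simple orderings fits in a line or two. A sketch, reusing the clause-as-letter-list representation from earlier (function names are my own):

```python
import random

def lexicographic(clauses):
    """First unset variable in alphabetical order."""
    return min(lit.upper() for clause in clauses for lit in clause)

def random_var(clauses):
    """Any variable still occurring in the problem."""
    return random.choice([lit.upper() for clause in clauses for lit in clause])

def first_occurrence(clauses):
    """First variable of the first clause."""
    return clauses[0][0].upper()

clauses = [['b', 'C'], ['A', 'd']]
```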
How can we design better heuristics?
- All the basic heuristics listed are unlikely to make the best choice except by good luck
- We want to choose variables likely to finish the search quickly
- How can we design heuristics to do this?
  - Pick variables occurring in lots of clauses?
  - Prefer short clauses (AB) or long clauses (ABCDEFG)?
  - Pick variables occurring more often positively?
- We need some design principles underlying our search
Three Design Principles
- The constrainedness hypothesis
  - Choose variables which are more constrained than other variables (e.g. pack suits before toothbrush for an interview trip)
  - Motivation: most constrained first
    - attack the most difficult part of the problem
    - it should either fail or succeed and make the rest easy
- The satisfaction hypothesis
  - Try to choose variables which seem likely to come closest to satisfying the problem
  - Motivation: we want to find a solution, so choose the variable which comes as close to that as possible
Three Design Principles
- The simplification hypothesis
  - Try to choose variables which will simplify the problem as much as possible via unit propagation
  - Motivation: search is exponential in the size of the problem, so making the problem small quickly minimizes search
- Let's look at three heuristics based on these principles
  - they are not wildly different from each other
  - often different principles give similar heuristics
Most Constrained First
- Short clauses are most constraining
  - (A B) rules out 1/4 of all solutions
  - (A B C D E) rules out only 1/32 of all solutions
- Take account only of the shortest clauses
  - e.g. the shortest clause in a problem may be of length 2
- Several variants on this idea:
  - first occurrence in a shortest clause
  - most occurrences in the shortest clauses (there are usually many such clauses)
  - first occurrence in an all-positive shortest clause
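The "most occurrences in the shortest clauses" variant might be sketched like this (my own naming, same clause-as-letter-list representation as before):

```python
from collections import Counter

def most_constrained(clauses):
    """Variable occurring most often among the shortest clauses."""
    shortest = min(len(c) for c in clauses)
    counts = Counter(lit.upper()
                     for c in clauses if len(c) == shortest
                     for lit in c)
    return counts.most_common(1)[0][0]

# B appears in both length-2 clauses; the length-4 clause is ignored.
clauses = [['A', 'B'], ['B', 'c'], ['A', 'B', 'C', 'D']]
choice = most_constrained(clauses)
```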
Satisfaction Hypothesis
- Try to satisfy as much as possible with the next literal
- Take account of the different clause lengths
  - a clause of length i rules out a fraction 2^-i of all solutions
  - weight each clause by the number 2^-i
- For each literal, calculate a weighted sum
  - add up the weight of each clause the literal appears in
  - the larger this sum, the more difficulties are eliminated
- This is the Jeroslow-Wang heuristic
- It gives both variable and value ordering
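The Jeroslow-Wang weighting is short to compute. A sketch (same representation as before, names my own); note that because the score attaches to a literal, the winner fixes a value as well as a variable:

```python
from collections import defaultdict

def jeroslow_wang(clauses):
    """Branch on the literal maximising sum of 2^-i over length-i clauses containing it."""
    score = defaultdict(float)
    for clause in clauses:
        weight = 2.0 ** -len(clause)          # clause of length i weighs 2^-i
        for lit in clause:
            score[lit] += weight
    best = max(score, key=score.get)
    return best.upper(), best.isupper()       # variable ordering AND value ordering

# +A scores 1/4 + 1/4 = 1/2, beating C (3/8) and everything else.
clauses = [['A', 'B'], ['a', 'C', 'D'], ['A', 'C']]
var, value = jeroslow_wang(clauses)
```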
Simplification Hypothesis
- We want to simplify the problem as much as possible
  - i.e. get the biggest possible cascade of unit propagation
- One approach is to suck it and see:
  - make an assignment and see how much unit propagation occurs
  - after testing all assignments, choose the one which caused the biggest cascade
  - the exhaustive version is expensive (2n probes necessary)
  - successful variants probe a small number of promising variables (e.g. those chosen by the most-constrained heuristic)
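The exhaustive "suck it and see" probing can be sketched by counting the assignments that unit propagation forces after each trial value (clause representation and all names are my own assumptions):

```python
def simplify(clauses, var, value):
    """clauses[var := value]: drop satisfied clauses, delete falsified literals."""
    return [[lit for lit in c if lit.upper() != var]
            for c in clauses
            if not any(lit.upper() == var and lit.isupper() == value for lit in c)]

def cascade_size(clauses, var, value):
    """How many further assignments does unit propagation force after var := value?"""
    clauses = simplify(clauses, var, value)
    forced = 0
    while not any(not c for c in clauses):    # stop at a contradiction
        unit = next((c for c in clauses if len(c) == 1), None)
        if unit is None:
            break
        clauses = simplify(clauses, unit[0].upper(), unit[0].isupper())
        forced += 1
    return forced

def best_probe(clauses, variables):
    """Exhaustive version: 2n probes, keep the assignment with the biggest cascade."""
    return max(((v, val) for v in variables for val in (True, False)),
               key=lambda p: cascade_size(clauses, *p))

# Chain of implications: A=true forces B, then C, then D (a cascade of 3).
clauses = [['a', 'B'], ['b', 'C'], ['c', 'D'], ['A', 'D']]
probe = best_probe(clauses, 'ABCD')
```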
Conclusions
- Unit propagation is vital to SAT
- Davis-Putnam (DP/DLL/DPLL) is successful
  - = depth-first search + unit propagation
- We need heuristics, especially for variable ordering
- Three design principles help
- It is not yet clear which is best
- Heuristic design is still a black art