10/1
–Project 2 will be given out by 10/3; it will likely be on Sudoku solving using CSP techniques
–Mid-term will likely be around 10/17

What a logic commits to:

Language          Ontological commitment       Epistemological commitment
                  (what exists: facts;         (what an agent believes
                  objects, relations)          about assertions)
Prop logic        facts                        t/f
Prob prop logic   facts                        degree of belief
FOPC              objects, relations           t/f/u
Prob FOPC         objects, relations           degree of belief

What is "monotonic" vs. "non-monotonic" logic?
Propositional calculus (as well as the first-order logic we shall discuss later) is monotonic, in that once you prove a fact F to be true, no amount of additional knowledge can allow us to disprove F. But in the real world we jump to conclusions by default, and revise them on additional evidence.
–Consider the way the truth of the statement "F: Tweety flies" is revised as we are given facts in sequence:
 1. Tweety is a bird (F)
 2. Tweety is an ostrich (~F)
 3. Tweety is a magical ostrich (F)
 4. Tweety was cursed recently (~F)
 5. Tweety was able to get rid of the curse (F)
How can we make logic support this sort of "defeasible" (aka defeatable) conclusion?
–Many ideas, with one being negation as failure
–Let the rule about birds be: Bird & ~abnormal => Fly
 The "abnormal" predicate is treated specially: if we can't prove abnormal, we can assume ~abnormal is true.
 (Note that in normal logic, failure to prove a fact F doesn't allow us to assume that ~F is true, since F may hold in some models and not in others.)
–The non-monotonic logic enterprise involves (1) providing clean semantics for this type of reasoning and (2) making defeasible inference efficient.
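The negation-as-failure rule above can be sketched in a few lines of Python. This is a minimal illustration only; the set-based "provability" test and the predicate names are made up for this example, not taken from any course code.

```python
# A minimal sketch of the default rule  Bird & ~abnormal => Fly.
# 'facts' stands in for what is provable from the KB.

def flies(facts):
    """Closed-world treatment of 'abnormal': if we cannot prove
    abnormal from the facts, we assume its negation holds."""
    return "bird" in facts and "abnormal" not in facts

kb = {"bird"}               # 1. Tweety is a bird
print(flies(kb))            # concludes F by default

kb.add("abnormal")          # 2. Tweety is an ostrich (abnormal w.r.t. flying)
print(flies(kb))            # the earlier conclusion is retracted
```

Note how adding a fact retracts a conclusion, which is exactly the non-monotonic behavior a classical logic cannot show.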

Fuzzy Logic vs. Prob. Prop. Logic
Fuzzy Logic assumes that the world is made up of statements that have different grades of truth.
–Recall the puppy example
Fuzzy Logic is "truth functional", i.e., it assumes that the truth value of a sentence can be established in terms of the truth values of the constituent elements of that sentence alone.
PPL assumes that the world is made up of statements that are either true or false.
PPL is truth functional for "truth value in a given world" but not truth functional for entailment status.
There was a big discussion on this.

α is true in all worlds (rows) where KB is true, so it is entailed.

KB & ~α |= False
So, to check if KB entails α: negate α, add it to the KB, and try to show that the resulting (propositional) theory has no solutions (we must use systematic methods for this).
Proof by model checking.
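This refutation recipe can be sketched with brute-force model checking over truth tables. It is a toy sketch that is only practical for a handful of propositions; the function names and encoding are illustrative, not from the course code.

```python
from itertools import product

def entails(kb, query, symbols):
    """KB |= query  iff  KB & ~query has no satisfying model.
    kb and query are functions from a model (dict) to bool."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not query(model):
            return False          # found a model of KB & ~query
    return True

# e.g. the KB {W=>J, ~W=>J} (which returns later in the lecture) entails J:
kb = lambda m: (not m["W"] or m["J"]) and (m["W"] or m["J"])
print(entails(kb, lambda m: m["J"], ["W", "J"]))
```

Dropping either implication from the KB makes the entailment fail, since the model W=false, J=false then satisfies KB & ~J.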

Connection between Entailment and Satisfiability
The Boolean satisfiability (SAT) problem is closely connected to propositional entailment.
–Specifically, propositional entailment is the "conjugate" problem of Boolean satisfiability, since we have to show that KB & ~f has no satisfying model in order to show that KB |= f.
Of late, our ability to solve very large-scale satisfiability problems has increased quite significantly.

Entailment & Satisfiability
SAT (Boolean satisfiability) problem:
–Given a set of propositions and a set of (CNF) clauses, find a model (an assignment of t/f values to the propositions) that satisfies all clauses.
–k-SAT is a SAT problem where all clauses have length at most k.
 »SAT is NP-complete.
 »1-SAT and 2-SAT are polynomial.
 »k-SAT for k>2 is NP-complete (so 3-SAT is the smallest k-SAT that is NP-complete).
If we have a procedure for solving SAT problems, we can use it to compute entailment:
–The sentence S is entailed iff the negation of S, when added to the KB, gives a SAT theory that is unsatisfiable (NO MODEL).
–Entailment is thus co-NP-complete.
SAT is useful for modeling many other "assignment" problems:
–We will see the use of SAT for planning; it can also be used for graph coloring, n-queens, scheduling, circuit verification etc. (the last makes SAT VERY interesting for electrical engineering folks).
Our ability to solve very large-scale SAT problems has increased quite phenomenally in recent years:
–We can solve SAT instances with millions of variables and clauses very easily.
–To use this technology for inference, we will have to consider systematic SAT solvers.

Model-checking by Stochastic Hill-climbing
Start with a model (a random t/f assignment to propositions)
For i = 1 to max_flips do:
–If the model satisfies all clauses, then return the model
–Else clause := a randomly selected clause that is false in the model
 With probability p, flip whichever symbol in that clause maximizes the number of satisfied clauses /*greedy step*/
 With probability (1-p), flip a randomly selected symbol from that clause /*random step*/
Return Failure
Remarkably good in practice!!
Worked example. Clauses:
1. (p,s,u) 2. (~p,q) 3. (~q,r) 4. (q,~s,t) 5. (r,s) 6. (~s,t) 7. (~s,u)
Consider the assignment "all false": clauses 1 (p,s,u) and 5 (r,s) are violated.
Pick one, say 5 (r,s): if we flip r, clause 1 remains violated; if we flip s, clauses 4, 6 and 7 become violated.
So the greedy thing is to flip r (we get all false, except r); otherwise, pick either randomly.
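A runnable sketch of this procedure (essentially WalkSAT) on the slide's seven clauses. The clause encoding as (symbol, sign) pairs is my own choice, not from the course code.

```python
import random

def walksat(clauses, symbols, p=0.5, max_flips=50_000):
    """Stochastic hill-climbing for SAT, as on the slide.
    A clause is a list of (symbol, sign) literals."""
    model = {s: random.choice([True, False]) for s in symbols}
    sat = lambda c: any(model[s] == sign for s, sign in c)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not sat(c)]
        if not unsat:
            return model
        clause = random.choice(unsat)
        if random.random() < p:
            # greedy step: flip the symbol maximizing satisfied clauses
            def score(s):
                model[s] = not model[s]
                n = sum(sat(c) for c in clauses)
                model[s] = not model[s]
                return n
            s = max((s for s, _ in clause), key=score)
        else:
            # random step: flip a random symbol from the clause
            s = random.choice(clause)[0]
        model[s] = not model[s]
    return None

# the slide's clauses, e.g. (~p,q) becomes [("p", False), ("q", True)]
clauses = [
    [("p", True), ("s", True), ("u", True)],
    [("p", False), ("q", True)],
    [("q", False), ("r", True)],
    [("q", True), ("s", False), ("t", True)],
    [("r", True), ("s", True)],
    [("s", False), ("t", True)],
    [("s", False), ("u", True)],
]
m = walksat(clauses, "pqrstu")
```

On this small satisfiable instance the walk finds a model (e.g. p=q=r=true, s=false) almost immediately.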

Progress in nailing the bound (just FYI; not discussed in class).

Inference rules
Sound (but incomplete):
–Modus Ponens: A=>B, A |= B
–Modus Tollens: A=>B, ~B |= ~A
–Abduction (??): A=>B, ~A |= ~B
–Chaining: A=>B, B=>C |= A=>C
Complete (but unsound):
–"Python" logic
How about SOUND & COMPLETE?
–Resolution (needs normal forms)
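Soundness of a rule can be checked mechanically: a rule is sound iff every model of its premises is a model of its conclusion. A small sketch over the two symbols A, B; note that the "(??)" pattern on the slide fails the check, which is presumably the point of the question marks.

```python
from itertools import product

def sound(premises, conclusion):
    """True iff every model of the premises satisfies the conclusion."""
    for A, B in product([True, False], repeat=2):
        if all(p(A, B) for p in premises) and not conclusion(A, B):
            return False
    return True

implies = lambda x, y: (not x) or y

# Modus Ponens: A=>B, A |= B
print(sound([lambda A, B: implies(A, B), lambda A, B: A],
            lambda A, B: B))          # sound

# Modus Tollens: A=>B, ~B |= ~A
print(sound([lambda A, B: implies(A, B), lambda A, B: not B],
            lambda A, B: not A))      # sound

# the "(??)" pattern: A=>B, ~A |= ~B
print(sound([lambda A, B: implies(A, B), lambda A, B: not A],
            lambda A, B: not B))      # NOT sound (A=false, B=true refutes it)
```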

If WMDs are found, the war is justified: W=>J
If WMDs are not found, the war is still justified: ~W=>J
Is the war justified anyway? |= J?
Can Modus Ponens derive it? No; we need something that does case analysis.

Forward: apply resolution steps until the fact f you want to prove appears as a resolvent.
Backward (Resolution Refutation): add the negation of the fact f you want to derive to the KB; apply resolution steps until you derive an empty clause.

Don't need to use other equivalences if we use resolution in refutation style.
If WMDs are found, the war is justified: ~W V J
If WMDs are not found, the war is still justified: W V J
Is the war justified anyway? |= J?
Refutation: add ~J.
–~J resolved with (~W V J) gives ~W
–~W resolved with (W V J) gives J
–J resolved with ~J gives the empty clause
(Forward: resolving (~W V J) with (W V J) directly gives J V J = J.)

Don't need to use other equivalences if we use resolution in refutation style.
If WMDs are found, the war is justified: ~W V J
If WMDs are not found, the war is still justified: W V J
Either WMDs are found or they are not found: W V ~W
Is the war justified anyway? |= J?
Refutation: add ~J.
–~J resolved with (~W V J) gives ~W
–~J resolved with (W V J) gives W
–W resolved with ~W gives the empty clause
(Forward: resolving (~W V J) with (W V J) gives J V J = J.)
Resolution does case analysis.

Prolog without variables and without the cut operator is doing Horn-clause theorem proving.
For any KB in Horn form, Modus Ponens is a sound and complete inference rule.
(Side note, from CSE/EEE 120: CNF is a.k.a. the product-of-sums form; DNF is a.k.a. the sum-of-products form.)

Conversion to CNF form
CNF clause = disjunction of literals
–Literal = a proposition or a negated proposition
Conversion steps:
1. Remove implications
2. Pull negation in (De Morgan's laws)
3. Distribute disjunction over conjunction
4. Separate the conjunctions into clauses
ANY propositional logic sentence can be converted into CNF form.
Try: ~(P&Q) => ~(R V W)
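For the "Try" exercise, here is the conversion worked out step by step and machine-checked by enumerating all 16 models. The CNF shown is my own working of the exercise, verified by the enumeration rather than taken from the slides.

```python
from itertools import product

implies = lambda a, b: (not a) or b

def original(P, Q, R, W):
    # the exercise: ~(P & Q) => ~(R V W)
    return implies(not (P and Q), not (R or W))

def as_cnf(P, Q, R, W):
    # 1. remove implication:   (P & Q) V ~(R V W)
    # 2. pull negation in:     (P & Q) V (~R & ~W)
    # 3. distribute V over &:  (P V ~R) & (P V ~W) & (Q V ~R) & (Q V ~W)
    return ((P or not R) and (P or not W)
            and (Q or not R) and (Q or not W))

assert all(original(*m) == as_cnf(*m)
           for m in product([True, False], repeat=4))
print("CNF is equivalent on all 16 models")
```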

Need for resolution
Yankees win => it is Destiny: ~Y V D
Dbacks win => it is Destiny: ~Db V D
Yankees or Dbacks win: Y V Db
Is it Destiny either way? |= D?
Can Modus Ponens derive it? Not until Sunday, when the Dbacks won.
Forward: resolving gives D V Y, then D V D = D. Resolution does case analysis.
Refutation style (don't need to use other equivalences): add ~D.
–~D resolved with (~Y V D) gives ~Y
–~Y resolved with (Y V Db) gives Db
–Db resolved with (~Db V D) gives D
–D resolved with ~D gives the empty clause

Steps in Resolution Refutation
Consider the following problem:
–If the grass is wet, then it is either raining or the sprinkler is on: GW => R V SP, i.e., ~GW V R V SP
–If it is raining, then Timmy is happy: R => TH, i.e., ~R V TH
–If the sprinklers are on, Timmy is happy: SP => TH, i.e., ~SP V TH
–If Timmy is happy, then he sings: TH => SG, i.e., ~TH V SG
–Timmy is not singing: ~SG
–Prove that the grass is not wet: |= ~GW?
Refutation: add GW, then resolve in sequence to get R V SP, TH V SP, SG V SP, SP, then TH, SG, and finally the empty clause (against ~SG).
Is there search in inference? Yes!! Many possible inferences can be done, but only a few are actually relevant.
–Idea: Set of Support. At least one of the resolved clauses is a goal clause, or a descendant of a clause derived from a goal clause. (Used in the example here!!)
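The whole refutation can be automated with a tiny saturation-style prover: add the negated goal, keep resolving until the empty clause appears or nothing new can be derived. This is a naive sketch (exponential in general, and with none of the set-of-support pruning just mentioned); the clause encoding as frozensets of (symbol, sign) literals is my own.

```python
from itertools import combinations

def resolvents(c1, c2):
    """All resolvents of two clauses (frozensets of (symbol, sign))."""
    out = []
    for sym, sign in c1:
        if (sym, not sign) in c2:
            out.append((c1 - {(sym, sign)}) | (c2 - {(sym, not sign)}))
    return out

def entailed(kb, goal):
    """goal is a single literal (symbol, sign): add its negation
    and saturate with resolution, looking for the empty clause."""
    sym, sign = goal
    clauses = set(kb) | {frozenset({(sym, not sign)})}
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolvents(c1, c2):
                if not r:
                    return True          # empty clause derived
                new.add(frozenset(r))
        if new <= clauses:
            return False                 # saturated without a refutation
        clauses |= new

# the grass-wet example, in CNF
kb = [
    frozenset({("GW", False), ("R", True), ("SP", True)}),  # GW => R V SP
    frozenset({("R", False), ("TH", True)}),                # R  => TH
    frozenset({("SP", False), ("TH", True)}),               # SP => TH
    frozenset({("TH", False), ("SG", True)}),               # TH => SG
    frozenset({("SG", False)}),                             # ~SG
]
print(entailed(kb, ("GW", False)))   # |= ~GW ?
```

As a sanity check, the same KB does not entail R: the all-false model satisfies every clause, so saturation terminates without finding the empty clause.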

What if the new fact is inconsistent with the KB?
Suppose we have the KB {P, P => ~F, Q => J, R}, and our friend comes running to tell us that M and F are true in the world.
We notice that we can't quite add F to the KB, since ~F is entailed. So what are our options?
–Ask our friend to take a hike
–Revise our theory so that F can be accommodated. To do this, we need to ensure that ~F is no longer entailed, which means we have to stop the proof of ~F from going through.
–Since the proof of ~F is {P, P => ~F |= ~F}, we have to change either the sentence P or the sentence P => ~F so that the proof won't go through.
–Often there are many ways of doing this revision, with little guidance as to which revision is best.
 »For example, we could change the second sentence to P & ~M => ~F
 »(But we could equally well have changed it to P & L => ~F)

10/3 Agenda (as actually happened in the class):
1. Project 2 code description (Sudoku puzzle)
2. Long discussion on k-consistency and enforcement of k-consistency
3. Discussion of the DPLL algorithm
4. Discussion of the state of the art in SAT solvers

Discussion of Project 2 code Notice that constraints can be represented either as pieces of code or declaratively as legal/illegal partial assignments –Sometimes the “pieces of code” may be more space efficient –(SAT solvers, unlike CSP solvers, expect explicit constraints—represented as clauses)

Why are CSP problems hard?
Because what seems like a locally good decision (a value assignment to a variable) may wind up leading to global inconsistency.
But what if we pre-process the CSP problem such that the local choices are more likely to be globally consistent?
–Two CSP problems CSP1 and CSP2 are considered equivalent if both of them have the same solutions.
Related to the way artificial potential fields can be set up for improving hill-climbing.

Pre-processing to enforce consistency
An n-variable CSP problem is said to be k-consistent iff every consistent assignment to (k-1) of the n variables can be extended to include any k-th variable.
It is strongly k-consistent if it is j-consistent for all j from 1 to k.
The higher the level of (strong) consistency of a problem, the less backtracking is required to solve it.
–A CSP with strong n-consistency can be solved without any backtracking.
We can improve the level of consistency of a problem by explicating implicit constraints.
–Enforcing k-consistency is of O(n^k) complexity.
 The break-even point seems to be around k=2 ("arc consistency") or k=3 ("path consistency").
Special terminology for binary CSPs:
–2-consistency is called "arc" consistency (since you need only consider pairs of variables connected by an edge in the constraint graph).
–3-consistency is called "path" consistency.

[Figure: total cost incurred in search, plotted against the degree of consistency enforced (measured in k-strong-consistency). The cost of enforcing the consistency rises with k while the cost of searching with the resulting heuristic falls, so the total cost has a minimum in between. How much consistency should we enforce? Overloading new semantics on an old graphic.]

Consistency and Hardness
In the worst case, a CSP can be solved efficiently (i.e., without backtracking) only if it is strongly n-consistent.
However, in many problems, enforcing k-consistency automatically renders the problem n-consistent as a side-effect.
–In such a case, we can clearly see that the problem is solvable in O(n^k) time (basically the time taken for the pre-processing).
The hardness of a CSP problem can thus be thought of in terms of the "degree of consistency" that needs to be enforced on that CSP before it can be solved efficiently (backtrack-free).

Enforcing Arc Consistency: An example
Before: X:{1,2,3}, Y:{1,2,3}, Z:{1,2,3}, with constraints X<Y and Y<Z.
After: X:{1}, Y:{2}, Z:{3}.
When we enforce arc-consistency on the CSP above (shown as a constraint graph on the slide), we get the reduced domains shown. Here is an explanation of what happens.
Suppose we start from Z. If Z=1, then Y can't have any valid value, so we remove 1 from Z's domain. If Z=2 or 3, then Y can have a valid value (since Y can be 1 or 2).
Now we go to Y. If Y=1, then X can't have any value, so we remove 1 from Y's domain. If Y=3, then Z can't have any value, so we remove 3 from Y's domain. So Y has just 2 in its domain!
Now notice that Y's domain has changed, so we re-consider Z (any time Y's domain changes, Z's domain may be affected). Sure enough, Z=2 no longer works, since Y can only be 2, and so Y can't take any value if Z=2. So we remove 2 from Z's domain as well: Z's domain is now just 3!
Now we go to X. X can't be 2 or 3 (since for either of those values, Y would have no value). So X has domain 1!
Notice that in this case arc-consistency SOLVES the problem, since X, Y and Z have exactly one value each, and that is the only possible solution. This is not always the case (see the next example).
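The propagation just described is essentially the AC-3 algorithm. A compact sketch on this example; the encoding of constraints as directed arcs with one relation per direction is my own choice.

```python
from collections import deque

def ac3(domains, constraints):
    """domains: {var: set of values}; constraints: {(x, y): relation},
    with a relation supplied for both directions of every arc."""
    queue = deque(constraints)
    while queue:
        x, y = queue.popleft()
        rel = constraints[(x, y)]
        # values of x with no supporting value in y's domain
        dead = {vx for vx in domains[x]
                if not any(rel(vx, vy) for vy in domains[y])}
        if dead:
            domains[x] -= dead
            # x's domain changed: re-examine every arc pointing at x
            queue.extend((z, x) for (z, w) in constraints if w == x)
    return domains

lt = lambda a, b: a < b
gt = lambda a, b: a > b
domains = {v: {1, 2, 3} for v in "XYZ"}
constraints = {("X", "Y"): lt, ("Y", "X"): gt,   # X < Y
               ("Y", "Z"): lt, ("Z", "Y"): gt}   # Y < Z
print(ac3(domains, constraints))   # X:{1}, Y:{2}, Z:{3}
```

Re-queueing the arcs into a changed variable is exactly the "re-consider Z" step in the prose above.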

Graph rectification as an analog for local consistency in normal search
Local consistency involves "pre-processing" the search space so later search is faster. One way we could do this for normal graph search is to do a k-lookahead from each state and revise a node's actual distance from its neighbors.
–Running value iteration for a few iterations has exactly this effect.

Consistency enforcement as inferring implicit constraints
In general, enforcing consistency involves explicitly adding constraints that are implicit given the existing constraints.
–E.g., in enforcing 3-consistency, if we find that for a particular 2-label {xi=v1 & xj=v2} there is no possible consistent value of xk, then we record this as an additional constraint: {xi=v1} => {xj != v2}
–Domain reduction is just a special case: when enforcing 2-consistency (arc-consistency), the new constraints are of the form xi != v1, and so can be represented by just contracting the domain of xi (pruning v1 from it).
Unearthing implicit constraints can also be interpreted as "inferring" new constraints that hold (are "entailed") given the existing constraints.
–In the context of Boolean CSPs (i.e., propositional satisfiability problems), the analogy is even more striking, since unearthing new constraints means writing down new clauses (or "facts") that are entailed by the existing clauses.
–This interpretation shows that consistency enforcement is just a form of "inference"/"entailment computation" process.
[Conditioning & Inference: the Yin and Yang] There is a general idea that in solving a search problem, you can interleave two different processes:
–"inference", which tries to either infer the solution itself or show that no solution exists, and
–"conditioning (or enumeration)", which attempts to systematically go through potential solutions looking for a real solution.
Good search algorithms interleave both inference and conditioning. E.g., the CSP algorithm we discussed in class uses backtracking search (enumeration) and forward checking (inference).

More on arc-consistency
[The slide shows a binary CSP that is arc-consistent but has no solution.] Arc-consistency doesn't always imply that the CSP has a solution, or that no search is required.
In the previous example, if each variable had the domain {1,2,3,4}, then at the end of enforcing arc-consistency each variable would still have 2 values in its domain, thus necessitating search.
[A second example on the slide shows that] the search may find that there is no solution for the CSP, even though it is arc-consistent.

Approximating K-Consistency
K-consistency enforcement takes O(n^k) effort. Since we are doing this only to reduce the overall search effort (rather than to get a separate badge for consistency), we can cut corners.
[Directional K-consistency] Given a fixed permutation (order) over the n variables, any assignment to the first k-1 variables can be extended to the k-th variable.
–Clearly cheaper than full K-consistency.
–If we know that the search will consider variables in a fixed order, then enforcing directional consistency w.r.t. that order is better.
[Partial K-consistency enforcement] Run the K-consistency enforcement algorithm partially (i.e., stop before reaching the fixed point).
–Put a time limit on the consistency computation.
 Recall how we could cut corners in computing Pattern Database heuristics by spending only a limited time on the PDB and substituting other, cheaper heuristics in their place.
–Or only do one pass of the consistency enforcement.
 This is what "forward checking" does.

Arc-Consistency > directed arc-consistency > Forward Checking (">" means "stronger than")
Start: X:{1,2,3}, Y:{1,2,3}, Z:{1,2,3}, with constraints X<Y and Y<Z.
–After (full) arc-consistency: X:{1}, Y:{2}, Z:{3}.
 AC is the strongest: it propagates changes in all directions until we reach a fixed point (no further changes).
–After directional arc-consistency, assuming the variable order X<Y<Z: X:{1,2}, Y:{2}, Z:{1,2,3}.
 DAC: for each variable u, we only consider the effects on the variables that come after u in the ordering.
–After forward checking, assuming the order X<Y<Z and X already set to value 1: X:{1}, Y:{2,3}, Z:{1,2,3}.
 FC: we start with the current assignment for some of the variables, and only consider their effects on the future variables. Only a single level of propagation is done: after we find that a value of Y is pruned, we don't try to see if that changes the domain of Z.
ADDED AFTER CLASS; IMPORTANT.
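The single-level nature of forward checking is easy to see in code: only variables directly constrained by the current assignment are pruned, and the pruned domains are never propagated further. A sketch under my own encoding (same relation-per-arc convention as the AC-3 sketch above would use):

```python
def forward_check(assignment, domains, constraints):
    """One-level propagation only: prune future variables against
    the assigned ones; pruned domains are NOT propagated onward."""
    domains = {v: set(d) for v, d in domains.items()}   # copy
    for (x, y), rel in constraints.items():
        if x in assignment and y not in assignment:
            domains[y] = {vy for vy in domains[y]
                          if rel(assignment[x], vy)}
    return domains

lt = lambda a, b: a < b
pruned = forward_check({"X": 1},
                       {"Y": {1, 2, 3}, "Z": {1, 2, 3}},
                       {("X", "Y"): lt, ("Y", "Z"): lt})
print(pruned)   # Y loses 1; Z is untouched, exactly as on the slide
```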

Davis-Putnam-Logemann-Loveland (DPLL) Procedure: detect failure.

DPLL Example
Clauses: (p,s,u), (~p,q), (~q,r), (q,~s,t), (r,s), (~s,t), (~s,u)
–Pick p; set p=true.
 Unit propagation: (p,s,u) satisfied (remove); from p and (~p,q), q is derived; set q=true.
 (~p,q) satisfied (remove); (q,~s,t) satisfied (remove); from q and (~q,r), r is derived; set r=true.
 (~q,r) satisfied (remove); (r,s) satisfied (remove).
–Pure literal elimination: in all the remaining clauses, s occurs only negative, so set s=false.
 (Note: s was not pure in all the clauses, only in the remaining ones.)
At this point all clauses are satisfied. Return p=true, q=true, r=true, s=false.
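The run above can be reproduced with a compact DPLL sketch combining unit propagation, pure-literal elimination, and branching. The (symbol, sign) clause encoding and the recursion structure are my own; this is an illustration, not the course implementation.

```python
def dpll(clauses, model=None):
    """Clauses are sets of (symbol, sign) literals."""
    model = dict(model or {})
    # drop clauses satisfied by the model, and falsified literals
    clauses = [c for c in clauses
               if not any(model.get(s) == sign for s, sign in c)]
    clauses = [{(s, sign) for (s, sign) in c if s not in model}
               for c in clauses]
    if any(len(c) == 0 for c in clauses):
        return None                     # an empty clause: dead end
    if not clauses:
        return model                    # every clause satisfied
    for c in clauses:                   # unit propagation
        if len(c) == 1:
            (s, sign), = c
            return dpll(clauses, {**model, s: sign})
    lits = {l for c in clauses for l in c}
    for s, sign in lits:                # pure literal elimination
        if (s, not sign) not in lits:
            return dpll(clauses, {**model, s: sign})
    s = next(iter(clauses[0]))[0]       # branch on some symbol
    return (dpll(clauses, {**model, s: True})
            or dpll(clauses, {**model, s: False}))

clauses = [frozenset(c) for c in [
    {("p", True), ("s", True), ("u", True)}, {("p", False), ("q", True)},
    {("q", False), ("r", True)}, {("q", True), ("s", False), ("t", True)},
    {("r", True), ("s", True)}, {("s", False), ("t", True)},
    {("s", False), ("u", True)},
]]
m = dpll(clauses)
print(m)
```

Unlike the stochastic hill-climber, this procedure is complete: if it returns None, the clauses are unsatisfiable, so it can be used for entailment.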

Model-checking by Stochastic Hill-climbing (recap)
Start with a model (a random t/f assignment to propositions)
For i = 1 to max_flips do:
–If the model satisfies all clauses, then return the model
–Else clause := a randomly selected clause that is false in the model
 With probability p, flip whichever symbol in that clause maximizes the number of satisfied clauses /*greedy step*/
 With probability (1-p), flip a randomly selected symbol from that clause /*random step*/
Return Failure
Remarkably good in practice!! But not complete, and so it can't be used to do entailment.
(Worked example with clauses 1-7 as before: from the "all false" assignment, clauses 1 (p,s,u) and 5 (r,s) are violated; picking 5, flipping r leaves only clause 1 violated while flipping s violates 4, 6 and 7, so the greedy step flips r.)

Lots of work in SAT solvers
–DPLL was the first (late 60's)
–Circa 1994 came GSAT (hill-climbing search for SAT)
–Circa 1997 came SATZ
 Branches on the variable that causes the most unit propagation
–Later came RelSAT
 Does dependency-directed backtracking
–~2000 came CHAFF
–Current champion: Siege
–Current best can be found at …
–A primer on solvers circa 2005 is at …