
1 Backtracking search: look-back
ICS 179 Spring 2010

2 Look-back: Backjumping / Learning
Backjumping: at dead-ends, go back to the most recent culprit variable. Learning: constraint recording, i.e., no-good recording.

3 Backjumping (x1=r, x2=b, x3=b, x4=b, x5=g, x6=r, x7={r,b})
Leaf dead-end: (r,b,b,b,g,r)
(r,b,b,b,g,r): a conflict set of x7
(r,-,b,b,g,-): a smaller conflict set of x7
(r,-,b,-,-,-): the minimal conflict set
Every conflict set is a no-good.

4 Gaschnig jumps only at leaf dead-ends
Internal dead-ends: dead-ends that are non-leaf.

6 Backjumping styles Jump at leaf only (Gaschnig 1977)
- Jump at leaf dead-ends only (Gaschnig 1977): context-based.
- Graph-based (Dechter 1990): jumps at leaf and internal dead-ends, uses graph information.
- Conflict-directed (Prosser 1993): context-based, jumps at leaf and internal dead-ends.

7 Gaschnig’s backjumping: Culprit variable
If a_i is a leaf dead-end and x_b is its culprit variable, then a_b is a safe backjump destination, while a_j for j < b is not. Example: the culprit of the x7 dead-end (r,b,b,b,g,r) is x3, i.e. the prefix (r,b,b).

8 Gaschnig's backjumping: implementation [1979]
Gaschnig uses a marking technique to compute the culprit. Each variable x_j maintains a pointer latest_j to the latest ancestor found incompatible with some value of x_j: while generating forward, keep an array latest_j, 1 <= j <= n, of pointers to the last variable whose assigned value conflicted with some value of x_j. The algorithm jumps from a leaf dead-end at x_{i+1} back to latest_{i+1}, which is its culprit.
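To make the marking technique concrete, here is a compact recursive sketch in Python. It is my reconstruction, not the course's pseudocode: the binary-CSP interface (a conflict(i, vi, j, vj) predicate), the ('SOL'/'JUMP') return convention, and the graph-coloring toy instance at the end are all invented for illustration.

    def consistent_up_to(assignment, j, value, conflict):
        # Return (ok, first_clash): first_clash is the earliest ancestor i < j
        # whose assigned value clashes with x_j = value (or -1 if none).
        for i in range(j):
            if conflict(i, assignment[i], j, value):
                return False, i
        return True, -1

    def solve(j, assignment, domains, conflict):
        # Returns ('SOL', full_assignment) or ('JUMP', dest), where dest is
        # the backjump destination from Gaschnig's marking (-1 = unsolvable).
        n = len(domains)
        if j == n:
            return 'SOL', assignment
        latest = -1                                # marking pointer latest_j
        for v in domains[j]:
            ok, clash = consistent_up_to(assignment, j, v, conflict)
            if not ok:
                latest = max(latest, clash)        # remember the deepest clash
                continue
            tag, payload = solve(j + 1, assignment + [v], domains, conflict)
            if tag == 'SOL':
                return tag, payload
            if payload < j:                        # jump passes over x_j
                return 'JUMP', payload
            latest = j - 1                         # a deeper dead-end landed
                                                   # here; if x_j runs out it
                                                   # steps back chronologically
        return 'JUMP', latest

    # Toy usage: 3-coloring a triangle with a pendant vertex (invented instance).
    edges = {(0, 1), (0, 2), (1, 2), (2, 3)}
    conflict = lambda i, vi, j, vj: ((i, j) in edges or (j, i) in edges) and vi == vj
    print(solve(0, [], [['r', 'g', 'b']] * 4, conflict))

At a leaf dead-end the jump target is exactly latest_j; a jump that lands on x_j and then exhausts its values falls back chronologically, since Gaschnig jumps only at leaf dead-ends.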

9 Graph-based backjumping scenarios: internal dead-end at x4
Scenario 1: dead-end at x4. Scenario 2: dead-end at x5. Scenario 3: dead-end at x7. Scenario 4: dead-end at x6.
[Figure: the constraint graph and the induced ancestor sets I(x_i) for each scenario.]

10 Graph-based backjumping
Uses only graph information to find the culprit, and jumps both at leaf and at internal dead-ends. Whenever a dead-end occurs at x, it jumps to the most recent variable y connected to x in the graph; if y is an internal dead-end, it jumps back further to the most recent variable connected to x or y. The analysis of conflicts is approximated by the graph, so graph-based algorithms admit graph-theoretic bounds.
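A minimal sketch of the graph-based jump rule, under the assumption that the constraint graph is given as an adjacency dict and variables are identified with their positions in the ordering (both are my conventions, not the slides'):

    def graph_based_destination(dead_ends, adjacency):
        # dead_ends: the current dead-end "session" (the leaf dead-end plus
        # any internal dead-ends reached by jumping). Jump to the latest
        # ancestor connected to ANY of them.
        earliest = min(dead_ends)
        ancestors = set()
        for x in dead_ends:
            ancestors |= {y for y in adjacency[x] if y < earliest}
        return max(ancestors) if ancestors else -1

    adjacency = {0: {2, 4}, 1: {3}, 2: {0, 4}, 3: {1, 4}, 4: {0, 2, 3}}
    print(graph_based_destination({4}, adjacency))     # leaf dead-end at x4: jump to 3
    print(graph_based_destination({4, 3}, adjacency))  # x3 also dead: jump to 2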

11 Graph-based backjumping on DFS orderings

12 DFS of graph and induced graphs
Spanning-tree of a graph; DFS spanning trees, BFS spanning trees.

13 Complexity of backjumping: pseudo-tree analysis
Simple rule: always jump back to the parent in the pseudo tree.
Complexity for CSPs: exp(tree-depth).
Since a pseudo tree of depth O(w* log n) always exists, this gives exp(w* log n).

14 Look-back: No-good Learning
Learning means recording conflict sets, used as constraints to prune the future search space. Example: (x1=2, x2=2, x3=1, x4=2) is a dead-end. Conflicts to record:
(x1=2, x2=2, x3=1, x4=2): 4-ary
(x3=1, x4=2): binary
(x4=2): unary
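A minimal no-good store, as a sketch of how recorded conflict sets prune future search; the class and method names are invented, and assignments are represented as {variable: value} dicts:

    class NoGoodStore:
        def __init__(self):
            self.nogoods = []                        # each: frozenset of (var, val)

        def record(self, conflict_set):
            # Record a conflict set as a new (no-good) constraint.
            self.nogoods.append(frozenset(conflict_set.items()))

        def violates(self, partial):
            # A partial assignment is pruned if it contains any recorded no-good.
            items = set(partial.items())
            return any(ng <= items for ng in self.nogoods)

    store = NoGoodStore()
    store.record({'x3': 1, 'x4': 2})                       # the binary conflict above
    print(store.violates({'x1': 1, 'x3': 1, 'x4': 2}))     # True: pruned
    print(store.violates({'x3': 1, 'x4': 1}))              # False

The smaller the recorded conflict set, the more future branches it prunes, which is why identifying minimal conflict sets matters.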

15 Learning, constraint recording
Learning means recording conflict sets. An opportunity to learn arises whenever a dead-end is discovered; the goal of learning is to avoid rediscovering the same dead-ends. Try to identify small conflict sets: learning prunes the search space.

16 Learning example

17 Learning issues
Learning styles:
- Graph-based or context-based
- i-bounded, scope-bounded
- Relevance-based
- Non-systematic randomized learning
Learning implies time and space overhead. Applicable to SAT.

19 Complexity of backtrack-learning for CSP
The complexity of learning along ordering d is time and space exponential in w*(d):
- The number of dead-ends is bounded by the number of recordable no-goods, O(n k^{w*(d)}).
- The number of constraint tests per dead-end is likewise exponential in w*(d).
- Space complexity is O(n k^{w*(d)}).
- Time complexity for learning combined with backjumping is O(n m e k^{w*(d)}), where m is the depth of the tree and e is the number of constraints.

20 Non-Systematic Randomized Learning
Do the search in a random way with interrupts, restarts, and unsafe backjumping, but record conflicts. Completeness is still guaranteed, thanks to the recorded no-goods.

21 Relationships between various backtracking algorithms

22 Empirical comparison of algorithms
- Benchmark instances
- Random problems
- Application-based random problems
- Generating fixed-length random k-SAT (n, m) uniformly at random
- Generating fixed-length random CSPs (N, K, T, C); also arity, r

23 Average-case complexity of random problems
Early random models fixed a distribution P, the probability that a literal appears in a clause; such formulas can be solved in polynomial time on average. Instead, generate random k-CNF: fixed-length formulas having m clauses, all of size 3 (or k), selected uniformly at random. What will the performance be when the number of clauses is small? When it is large? In between? The hard region is at m/n = 4.2, which is also the point where the probability of satisfiability shifts from 1 to 0.
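The fixed-clause-length model is easy to state in code. A sketch, with a plus/minus integer literal encoding that is my choice rather than anything from the slides:

    import random

    def random_k_cnf(n, m, k=3, seed=None):
        # m clauses; each picks k distinct variables uniformly at random
        # and negates each one independently with probability 1/2.
        rng = random.Random(seed)
        formula = []
        for _ in range(m):
            variables = rng.sample(range(1, n + 1), k)
            formula.append([v if rng.random() < 0.5 else -v for v in variables])
        return formula

    # Near the 3-SAT phase transition quoted on the slide: m/n = 4.2.
    print(random_k_cnf(n=20, m=int(4.2 * 20), seed=0)[:3])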

24 The Phase transition (m/n)

25 Some empirical evaluation
Sets 1-3 report averages over 2000 instances of random CSPs at 50% hardness. Set 1: 200 variables; Set 2: 300; Set 3: 350. All had 3 values. Also: DIMACS problems.

26 State-of-the-art in SAT solvers
Vibhav Gogate

27 SAT formulas
A set of propositional variables and clauses involving the variables, e.g. (x1 + x2' + x3) and (x2 + x1' + x4):
- x1, x2, x3 and x4 are variables (true or false).
- Literals: a variable and its negation, e.g. x1 and x1'.
- A clause is satisfied if one of its literals is true: x1 = true satisfies clause 1; x1 = false satisfies clause 2.
- Solution: an assignment that satisfies all clauses.
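In the plus/minus integer encoding used in the sketch above, the slide's two clauses and the satisfaction test look like this (the encoding and helper name are illustrative):

    # (x1 + x2' + x3) and (x2 + x1' + x4)
    formula = [[1, -2, 3], [2, -1, 4]]

    def satisfies(assignment, formula):
        # assignment maps variable -> bool; each clause needs one true literal.
        return all(any(assignment[abs(l)] == (l > 0) for l in clause)
                   for clause in formula)

    print(satisfies({1: True, 2: True, 3: False, 4: False}, formula))  # True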

28 SAT solvers
Given 10 minutes of time:
- Started with DPLL (1962): able to solve …-variable problems.
- Satz (Chu Min Li, 1995): able to solve some 1000-variable problems.
- Chaff (Malik et al., 2001): intelligently hacked DPLL; won the 2004 competition; able to solve some …-variable problems.
- Current state of the art: Minisat and SATELITEGTI (Chalmers University, …), Jerusat and Haifasat (Intel Haifa, 2002), Ace (UCLA, …).

29 DPLL example
Clauses: {p, r}, {¬p, q, r}, {p, ¬r}
p = T: {T, r}, {F, q, r}, {T, ¬r} → SIMPLIFY → {q, r}
p = F: {F, r}, {T, q, r}, {F, ¬r} → SIMPLIFY → {r}, {¬r} → SIMPLIFY → {} (conflict)

30 DPLL algorithm as seen by a SAT solver
    while (1) {
        if (decide_next_branch()) {            // 1. Branching
            while (deduce() == CONFLICT) {     // 2. Deducing
                blevel = analyze_conflicts();  // 3. Learning
                if (blevel < 0) return UNSAT;
                else backtrack(blevel);        // 4. Backtracking
            }
        } else return SAT;                     // no unassigned variables remain
    }
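To make the control flow concrete, here is a minimal recursive DPLL in Python: unit propagation plus splitting, with chronological backtracking and no learning, so it corresponds to the 1962 algorithm rather than to Chaff. The plus/minus integer encoding and helper names are mine:

    def unit_propagate(clauses, assignment):
        # Repeatedly assign forced (unit) literals; (None, None) signals conflict.
        changed = True
        while changed:
            changed, new = False, []
            for c in clauses:
                live = [l for l in c if -l not in assignment]   # drop false literals
                if any(l in assignment for l in live):
                    continue                                    # clause satisfied
                if not live:
                    return None, None                           # empty clause: conflict
                if len(live) == 1:
                    assignment.add(live[0]); changed = True     # implication
                else:
                    new.append(live)
            clauses = new
        return clauses, assignment

    def dpll(clauses, assignment=frozenset()):
        clauses, assignment = unit_propagate(clauses, set(assignment))
        if clauses is None:
            return None                                  # conflict on this branch
        if not clauses:
            return assignment                            # all clauses satisfied: SAT
        lit = clauses[0][0]                              # 1. branching
        return (dpll(clauses, assignment | {lit})        # try lit = true
                or dpll(clauses, assignment | {-lit}))   # 4. backtracking

    print(dpll([[1, 2], [-1, 2], [-2, 3], [-2, -3]]))    # None: UNSAT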

32 Chaff implementation
The same DPLL loop as in slide 30, highlighting step 3: use conflict-directed backjumping + learning.

33 Learning
Adding information about the instance into the solution process, without changing the satisfiability of the problem. In the CNF representation this is accomplished by adding clauses to the clause database. Knowledge of failure in one part of the space may help search in other parts. Learning is very effective in pruning the search space for structured problems, but of limited use for random instances. Why? Still an open question.

34 Chaff implementation
The same DPLL loop again, highlighting step 2: Boolean constraint propagation (deduce) is the main cost factor.

35 Naive implementation of deduce (unit propagation)
Check every clause after an assignment is made, and reduce it if possible; repeat if a unit clause is generated (an implication). After a backtrack, revert all clauses to their original form. Very slow: a solver would spend 85-90% of its time doing unit propagation. Why not speed it up?
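A direct sketch of this naive scheme; the plus/minus integer encoding is as above and the function name is mine. The comment at the bottom is the point of the slide:

    def naive_deduce(clauses, assignment):
        # Scan ALL clauses after every assignment round: the 85-90% hotspot.
        while True:
            units, reduced = [], []
            for c in clauses:
                live = [l for l in c if -l not in assignment]   # drop false literals
                if any(l in assignment for l in live):
                    continue                                    # clause satisfied
                if not live:
                    return 'conflict', clauses                  # empty clause
                if len(live) == 1:
                    units.append(live[0])                       # implication
                reduced.append(live)
            if not units:
                return 'ok', reduced
            assignment.update(units)
            clauses = reduced          # this destructive reduction is what has
                                       # to be undone on every backtrack

    print(naive_deduce([[1], [-1, 2], [-2, 3]], set()))  # ('ok', []) after implying 1, 2, 3

Chaff's answer to the closing question is the two-watched-literal scheme: each clause watches just two literals, a clause is revisited only when a watched literal becomes false, and the watches need no undoing on backtrack.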

52 Chaff implementation
The same DPLL loop again, highlighting step 1: variable ordering heuristics (decide_next_branch).

54 Other issues: clause deletion
Learned clauses slow down BCP and eat up memory, so delete clauses periodically; various heuristics exist for this purpose (used by the winner of the 2004 SAT competition).

55 But Chaff is no longer state of the art
More hacking: Minisat (2006, winner of SAT-Race 2006) is based on Chaff but is a better, faster implementation, with some new things like conflict analysis and clause minimization, but is basically the same as Chaff.

56 Benchmarks
- Random
- Crafted
- Industrial

57 Stochastic greedy local search (Chapter 7)

58 Example: the 8-queens problem

59 Main elements
Choose a full assignment and iteratively improve it towards a solution. Requires a cost function, e.g. the number of unsatisfied constraints or clauses (neural networks use energy minimization). Drawback: local minima; remedy: introduce a random element. Cannot decide inconsistency.

60 Algorithm: Stochastic Local Search (SLS)

61 Example (as CNF): z divides y, x, t, with z = {2,3,5}, x, y = {2,3,4}, t = {2,5,6}

62 Heuristics for improving local search
- Plateau search: at local minima, continue searching sideways.
- Constraint weighting: use a weighted cost function, where the cost C_i is 1 if constraint i is violated and 0 otherwise; at local minima, increase the weights of the violated constraints (see the sketch after this list).
- Tabu search: prevent backward moves by keeping a list of assigned variable-value pairs. A tie-breaking rule may be conditioned on historic information: select the value that was flipped least recently.
- Automating max-flips: based on experimenting with a class of problems; given progress in the cost function, allow the same number of flips used up to the current progress.
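A sketch of the constraint-weighting (breakout-style) update, assuming constraints are predicates over a full assignment; all names are illustrative:

    def weighted_cost(assignment, constraints, weights):
        # Weighted count of violated constraints.
        return sum(w for c, w in zip(constraints, weights) if not c(assignment))

    def escape_local_minimum(assignment, constraints, weights):
        # At a local minimum, bump the weight of every violated constraint
        # so the current assignment stops looking locally optimal.
        for i, c in enumerate(constraints):
            if not c(assignment):
                weights[i] += 1
        return weights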

63 Random walk strategies
Combine random walk with greediness. At each step, choose an unsatisfied clause at random; with probability p flip a random variable in the clause, and with probability (1-p) do a greedy step minimizing the break value: the number of constraints that become newly unsatisfied.

64 Figure 7.2: Algorithm WalkSAT
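The figure itself is not in the transcript; as a placeholder, here is a minimal Python sketch of the WalkSAT loop matching the strategy on the previous slide (random unsatisfied clause, then a random flip with probability p, else the greedy flip minimizing the break count). The plus/minus integer clause encoding is as in the earlier sketches:

    import random

    def break_count(lit, clauses, assignment):
        # Number of currently satisfied clauses that flipping lit's variable breaks.
        flipped = dict(assignment)
        flipped[abs(lit)] = not flipped[abs(lit)]
        sat = lambda c, a: any(a[abs(l)] == (l > 0) for l in c)
        return sum(sat(c, assignment) and not sat(c, flipped) for c in clauses)

    def walksat(clauses, n, p=0.5, max_flips=10_000, seed=None):
        rng = random.Random(seed)
        a = {v: rng.random() < 0.5 for v in range(1, n + 1)}   # random start
        for _ in range(max_flips):
            unsat = [c for c in clauses
                     if not any(a[abs(l)] == (l > 0) for l in c)]
            if not unsat:
                return a                                   # satisfying assignment
            clause = rng.choice(unsat)                     # random unsatisfied clause
            if rng.random() < p:
                lit = rng.choice(clause)                   # random-walk step
            else:                                          # greedy step
                lit = min(clause, key=lambda l: break_count(l, clauses, a))
            a[abs(lit)] = not a[abs(lit)]                  # flip
        return None                                        # give up (max-flips)

    print(walksat([[1, 2], [-1, 2], [-2, 3]], n=3, seed=0))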

65 Example of walkSAT: start with assignment of true to all vars

66 Properties of local search
- Greedy descent is guaranteed to terminate, but only at a local minimum.
- Random walk on 2-SAT is guaranteed to converge with probability 1 after O(N^2) steps, where N is the number of variables. Proof sketch: a random assignment is on average N/2 flips away from a satisfying assignment; a flip on an unsatisfied 2-clause has at least a 1/2 chance of reducing the distance to a fixed satisfying assignment by 1, so the random walk covers this distance in O(N^2) steps on average. The analysis breaks down for 3-SAT.
- Empirical evaluation shows good performance compared with complete algorithms (see chapter and numerous papers).

