
1 Constraint Solving: Problems, Domains and Search Methods
Jacques Robin

2 Outline
What is constraint solving?
Constraint domains
Constraint solving inference services
Practical applications of CSP
General problem solving through search
Finite domain Constraint Solving Problem (CSP) solving through search
CSP search algorithms

3 What is Constraint Solving?
A versatile paradigm for symbolic, numerical and hybrid symbolic-numerical automated reasoning
Relies on a hybrid logical-numerical knowledge representation formalism
Relies on AI search, term rewriting, operations research and mathematical inference algorithms
Allows reasoning with incomplete information
Takes as input intensional and extensional knowledge
When the input knowledge is consistent and complete, returns extensional knowledge as output
When the input knowledge is consistent but incomplete, returns as output intensional and extensional knowledge that is more concise and easier to understand than the input
Identifies inconsistent input knowledge
Most other automated reasoning paradigms (monotonic deduction, belief revision, belief update, abduction, planning, optimization) can be reformulated as some form of constraint solving

4 Constraint Solving Problems (CSP)
Input:
A set of variables, each associated with a domain of possible values (constants)
A set of functions defining mappings between these domains
A set of relations (called primitive constraints), including equations and inequations, over these domains
A logical conjunction of such relations (called a compound constraint)
Output: composed of the same elements as the input
If the input is just rightly constrained, the output is one complete consistent variable valuation, i.e., a logical conjunction of equations of the form <variable> = <constant value>.
If the input is underconstrained, the output is a simplification of the input, i.e., a logically equivalent conjunction of primitive constraints containing fewer constraints and/or functions and/or variables.
If the input is overconstrained, the output is "fail", for there exists no variable valuation that simultaneously satisfies all constraints.

5 CSP Example: Analog Circuit Modeling
Intensional generic circuit class model GM: V = V1 ∧ V = V2 ∧ V1 = I1 × R1 ∧ V2 = I2 × R2 ∧ I = I1 + I2
Particular circuit instance data sets:
PD1 = (V = 10 ∧ R1 = 10 ∧ R2 = 5), extensional
PD2 = (V = 10 ∧ R2 = 10 ∧ I1 = 5), extensional
PD3 = (R1 = 10 ∧ R2 = 5), extensional
PD4 = (V = 10 ∧ R1 = 5 ∧ I = 1 ∧ 0 ≤ R2), intensional
Solving particular circuit instance model PM1 = GM ∧ PD1 yields the extensional consistent solution: V = 10 ∧ V1 = 10 ∧ V2 = 10 ∧ R1 = 10 ∧ R2 = 5 ∧ I = 3 ∧ I1 = 1 ∧ I2 = 2
Solving particular circuit instance model PM2 = GM ∧ PD2 yields the extensional consistent solution: V = 10 ∧ V1 = 10 ∧ V2 = 10 ∧ R1 = 2 ∧ R2 = 10 ∧ I = 6 ∧ I1 = 5 ∧ I2 = 1
Solving particular circuit instance model PM3 = GM ∧ PD3 yields the intensional consistent solution: V × 3 = I × 10
Solving particular circuit instance model PM4 = GM ∧ PD4 yields fail (inconsistent input)

6 CSP Example: Building Scheduling
[Figure: generic building model GM as a precedence graph for building a house, over stages S, A, B, C, D, E, with tasks start (stage S), foundations 7 days (stage A), interior walls 4 days (stage B), exterior walls (stage C), chimney 3 days, roof (stage D), doors 2 days, tiles and windows (stage E)]
Particular query Q1: TS = 0 ∧ Tm = min(TE)
Solution to particular problem GM ∧ Q1: TS = 0 ∧ TA = 7 ∧ TB = 11 ∧ TC = 10 ∧ TD = 12 ∧ TE = 15 ∧ Tm = 15
Particular query Q2: TE ≤ 14
Solution to particular problem GM ∧ Q2: fail

7 CSP Example: Map Coloring
Generic Australia map coloring model AMCM: WT ≠ SO ∧ WT ≠ NT ∧ NT ≠ SO ∧ NT ≠ Q ∧ Q ≠ SO ∧ Q ≠ NSW ∧ NSW ≠ SO ∧ NSW ≠ V ∧ V ≠ SO
Color set instance BGR: WT ∈ {blue, green, red} ∧ SO ∈ {blue, green, red} ∧ NT ∈ {blue, green, red} ∧ Q ∈ {blue, green, red} ∧ NSW ∈ {blue, green, red} ∧ V ∈ {blue, green, red} ∧ T ∈ {blue, green, red}
Solving the specific Australian map coloring problem AMCM ∧ BGR yields a complete consistent solution: SO = blue ∧ WT = red ∧ NT = green ∧ Q = red ∧ NSW = green ∧ V = red ∧ T = green
Solving the specific Australian map coloring problem with any set instance with two-color domains for all variables yields fail

8 The Language of CSP: MOF Metamodel
[UML class diagram: a MOF metamodel of the CSP language, relating FOL terms (functional and non-functional, ground and non-ground, constants, variables), FOL atoms and formulas, connectives and quantifiers, predicate, function and constraint symbols, primitive and compound (conjunctive) constraints, constraint domains, valuations and variable assignments]

9 CSP Domains: MOF Metamodel
[UML class diagram: a taxonomy of constraint domains (CD), each with =, ≠, true, false. CDs divide into symbolic vs. numeric and finite vs. infinite; symbolic CDs include nominal and ordinal finite domains, strings (concat), booleans (∧, ∨, →, ↔, ¬ over {0,1}) and rational trees (unification); numeric CDs include integer and real equations and inequalities (≤, ≥, with +, -, *, /, ^, log, sin, cos, ...), with real polynomial and real/integer linear sub-classes; mixed CDs combine two or more of these]

10 CSP Solving Services
Substitution
Satisfaction
Absolute Implication
Absolute Equivalence
Normalization
Absolute Simplification
Projection
Relative Implication
Relative Equivalence
Relative Simplification
Local Propagation
Optimization
Labeling

11 CSP Solving Services: Substitution
Substitute(θ:Valuation, C:CompoundConstraint):CompoundConstraint returns the result of substituting in C the variables in θ by their values in θ.
Examples:
C: B = P + I × P ∧ B2 = B + I × B
θ1: B = 1200 ∧ P = 1000 ∧ I = 0.2 ∧ B2 = 1440
θ1(C): 1200 = 1000 + 0.2 × 1000 ∧ 1440 = 1200 + 0.2 × 1200
θ2: B = 1 ∧ I = 1
θ2(C): 1 = P + 1 × P ∧ B2 = 1 + 1 × 1
θ3: B = 1 ∧ P = 0 ∧ I = 1 ∧ B2 = 1
θ3(C): 1 = 0 + 1 × 0 ∧ 1 = 1 + 1 × 1
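A minimal executable sketch of the Substitute service (not from the slides; it assumes the sympy library), applying a total and a partial valuation to the simple-interest constraint C above:

```python
# Sketch (assumes sympy): substitution grounds the substituted variables
# and leaves a residual constraint over the remaining ones.
from sympy import symbols, Eq, And

B, P, I, B2 = symbols("B P I B2")

# C: B = P + I*P  AND  B2 = B + I*B
C = And(Eq(B, P + I * P), Eq(B2, B + I * B))

# theta1: a complete valuation -- the result fully evaluates
theta1 = {B: 1200, P: 1000, I: 0.2, B2: 1440}
print(C.subs(theta1))        # -> True (both equations hold)

# theta2: a partial valuation -- the result is a residual constraint
theta2 = {B: 1, I: 1}
print(C.subs(theta2))        # -> Eq(1, 2*P) & Eq(B2, 2)
```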

12 CSP Solving Services: Satisfaction
Satisfiable(C:CompoundConstraint):Boolean
result = true iff ∃θ:Valuation | Substitute(θ, C) holds; if result = true, also returns θ.
Examples:
C1: B = P + I × P ∧ B2 = B + I × B
Satisfiable(C1) = true, since θ1 = (B = 1200 ∧ P = 1000 ∧ I = 0.2 ∧ B2 = 1440) and θ1(C1) = (1200 = 1000 + 0.2 × 1000 ∧ 1440 = 1200 + 0.2 × 1200) ↔ (1200 = 1200 ∧ 1440 = 1440) ↔ (true ∧ true) ↔ true
C2: X = Y + 1 ∧ Y = X + 1
Satisfiable(C2) = false, since C2 ↔ (X = X + 1 + 1 ∧ Y = X + 1) ↔ (X = X + 2 ∧ Y = X + 1) ↔ (0 = 2 ∧ Y = X + 1) ↔ (false ∧ Y = X + 1) ↔ false

13 CSP Solving Services: Absolute Implication and Equivalence
Implies(C1:CompoundConstraint, C2:CompoundConstraint):Boolean
result = true iff ∀θ:Valuation, θ(C1) satisfiable → θ(C2) satisfiable
Examples:
C1 = (TS ≥ 0 ∧ TA ≥ TS + 7 ∧ TB ≥ TA + 4 ∧ TC ≥ TA + 3 ∧ TD ≥ TC + 2 ∧ TE ≥ TB + 2 ∧ TE ≥ TC + 3 ∧ TE ≥ TD + 3)
C2 = (TB ≥ TC)
Implies(C1,C2) = false, since θ = (TS = 0 ∧ TA = 7 ∧ TB = 11 ∧ TC = 12 ∧ TD = 14 ∧ TE = 17) satisfies C1 but not C2
C3 = C1 ∧ TE = 15
Implies(C3,C2) = true, since C3 → (12 ≥ TD ∧ 10 ≥ TC ∧ TA ≥ 7 ∧ TB ≥ 11)
Equivalent(C1:CompoundConstraint, C2:CompoundConstraint):Boolean
result = true iff ∀θ:Valuation, θ(C1) satisfiable ↔ θ(C2) satisfiable
C1 ↔ C2 iff Implies(C1, C2) and Implies(C2, C1)

14 CSP Solving Services: Normalization
Solved form compound constraint: X1 = e1 ∧ ... ∧ XN = eN such that none of the variables X1 ... XN occur in any of the expressions e1 ... eN.
Normalize(C:CompoundConstraint):CompoundConstraint
result = S is in solved form and verifies S ↔ C
Example:
C = (X = 2 + Y ∧ 2*Y + X − T = Z ∧ X + Y = 4 ∧ Z + T = 5)
S = Normalize(C) = (X = 3 ∧ Y = 1 ∧ Z = 5 − T)
C ↔ (X = 2 + Y ∧ 2*Y + 2 + Y − T = Z ∧ 2 + Y + Y = 4 ∧ Z = 5 − T)
↔ (X = 2 + Y ∧ 3*Y + 2 − T = 5 − T ∧ 2*Y = 2 ∧ Z = 5 − T)
↔ (X = 2 + Y ∧ 3*Y + 2 = 5 ∧ Y = 1 ∧ Z = 5 − T)
↔ (X = 2 + 1 ∧ 3*1 + 2 = 5 ∧ Y = 1 ∧ Z = 5 − T)
↔ (X = 3 ∧ 5 = 5 ∧ Y = 1 ∧ Z = 5 − T)
↔ (X = 3 ∧ true ∧ Y = 1 ∧ Z = 5 − T)
↔ (X = 3 ∧ Y = 1 ∧ Z = 5 − T) = S

15 CSP Solving Services: Absolute Simplification
Simplify(C:CompoundConstraint):CompoundConstraint
result = S is an equivalent, simpler constraint, i.e., S ↔ C, and S has fewer primitive constraints than C and/or S has more constraints in solved form than C and/or S has fewer function symbols than C and/or S has fewer variables than C
Example:
C = (X ≥ Y + Z ∧ U + V ≥ X + V ∧ U = Z + Y ∧ V + V = 0 ∧ {U,V,X,Y,Z} ⊆ N)
S = (X = Y + Z ∧ U = Z + Y ∧ V = 0 ∧ {U,V,X,Y,Z} ⊆ N)
Since C ↔ (X ≥ Y + Z ∧ U + V ≥ X + V ∧ U = Z + Y ∧ V = 0 ∧ {U,V,X,Y,Z} ⊆ N)
↔ (X ≥ Y + Z ∧ 0 + U ≥ X + 0 ∧ U = Z + Y ∧ {U,V,X,Y,Z} ⊆ N ∧ V = 0)
↔ (X ≥ Y + Z ∧ U ≥ X ∧ U = Z + Y ∧ {U,V,X,Y,Z} ⊆ N ∧ V = 0)
↔ (X ≥ Y + Z ∧ Z + Y ≥ X ∧ U = Z + Y ∧ V = 0 ∧ {U,X,Y,Z} ⊆ N)
↔ S

16 CSP Solving Services: Projection
Valuation extension: given a valuation θB of the form (X1 = v1 ∧ ... ∧ XB = vB), any valuation θE of the form (X1 = v1 ∧ ... ∧ XB = vB ∧ XB+1 = vB+1 ∧ ... ∧ XE = vE) is an extension of θB.
Partial solution: a valuation θP is a partial solution of a constraint C iff ∃θF:Valuation such that θF extends θP and θF is a solution of C
Notation: vars(C) = set of variables occurring in constraint C
Project(C:CompoundConstraint, Vs:VariableSet):CompoundConstraint
precondition: Vs ⊆ vars(C)
result P verifies:
vars(P) ⊆ Vs
C → P
∀θP:Valuation over Vs, θP solution of P → θP partial solution of C

17 Projection Examples
C1 = (X ≤ Y ∧ Y ≤ Z ∧ Z ≤ T ∧ T ≤ 0)
Project(C1,{X}) = (X ≤ 0)
C2 = (f(Y,Y) = f(X,Z) ∧ s(Z) = s(T) ∧ f bijection)
Project(C2,{X,Z}) = (X = Z)
C3 = (X + Y ≤ 1 ∧ X − Y ≤ 1 ∧ −X + Y ≤ 1 ∧ −X − Y ≤ 1)
Project(C3,{X}) = (−1 ≤ X ∧ X ≤ 1)
Counter-example: C4 = (X = f(Y,Z))
Project(C4,{X}) = fail, for there is no primitive constraint in C4 that either does not contain X or can be simplified

18 CSP Solving Services: Local Propagation
Determined solved form of a compound constraint: X1 = v1 ∧ ... ∧ XN = vN, where X1 ... XN are variables and v1 ... vN are constants
Propagate(Cd:CompoundConstraint, C:CompoundConstraint):CompoundConstraint
preconditions: Cd is a sub-conjunction of C, and Cd is in determined solved form
result = Propagate(Cd(C), choose(Cd', determines(Cd,C))), i.e.:
apply Cd as a valuation substitution on C
find which other sub-conjunctions of C become determined by this substitution (determines(Cd,C))
choose one of them, Cd'
recursively propagate Cd' on Cd(C)
stop when propagation fails to determine any new member of C

19 Local Propagation Example
C = (V = V1 ∧ V = V2 ∧ V1 = I1 × R1 ∧ V2 = I2 × R2 ∧ I = I1 + I2)
Cd = (V = 10 ∧ R1 = 5 ∧ R2 = 10)
Propagate(Cd,C) = (V = 10 ∧ V1 = 10 ∧ V2 = 10 ∧ I1 = 2 ∧ I2 = 1 ∧ I = 3)
Since:
Cd(C) = (10 = V1 ∧ 10 = V2 ∧ V1 = I1 × 5 ∧ V2 = I2 × 10 ∧ I = I1 + I2)
Cd' = (V1 = 10 ∧ V2 = 10)
Cd'(Cd(C)) = (10 = 10 ∧ 10 = 10 ∧ 10 = I1 × 5 ∧ 10 = I2 × 10 ∧ I = I1 + I2)
Cd'' = (I1 = 2 ∧ I2 = 1)
Cd''(Cd'(Cd(C))) = (10 = 10 ∧ 10 = 10 ∧ 10 = 2 × 5 ∧ 10 = 1 × 10 ∧ I = 2 + 1)
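A minimal sketch of this propagation on the circuit example (not from the slides; the encoding is an assumption: each primitive constraint is listed with its solved forms, and any rewrite whose inputs are all determined fires until a fixpoint):

```python
# Sketch: determined-variable propagation for the circuit constraints.
rewrites = [
    ("V1", ("V",),       lambda V: V),             # from V = V1
    ("V2", ("V",),       lambda V: V),             # from V = V2
    ("I1", ("V1", "R1"), lambda V1, R1: V1 / R1),  # from V1 = I1 * R1
    ("I2", ("V2", "R2"), lambda V2, R2: V2 / R2),  # from V2 = I2 * R2
    ("I",  ("I1", "I2"), lambda I1, I2: I1 + I2),  # from I = I1 + I2
]

def propagate(known):
    changed = True
    while changed:                     # stop when no new variable is determined
        changed = False
        for target, deps, solve in rewrites:
            if target not in known and all(d in known for d in deps):
                known[target] = solve(*(known[d] for d in deps))
                changed = True
    return known

print(propagate({"V": 10, "R1": 5, "R2": 10}))
# -> V1 = 10, V2 = 10, I1 = 2.0, I2 = 1.0, I = 3.0
```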

20 CSP Solving Services: Optimization
Optimize(C:CompoundConstraint, F:CostFunction):Valuation
if C is overconstrained, result = fail
if C is just rightly constrained, result = the unique θu such that θu(C) is satisfiable
if C is underconstrained, result = one of the lowest-cost solutions, i.e., θo such that θo(C) is satisfiable and ∀θ such that θ(C) is satisfiable, F(θo) ≤ F(θ)
if there is no such lowest-cost solution, result = none
Examples:
C1 = (X + Y ≥ 5)
F(X,Y) = X^2 + Y^2; Optimize(C1,F) = (X = 2 ∧ Y = 3)
G(X,Y) = X + Y; Optimize(C1,G) = any solution to C2 = (X + Y = 5)
C3 = (X < 0); H(X) = X; Optimize(C3,H) = none (no lowest-cost solution exists)

21 CSP Solving Services: Labeling
Label(C:CompoundConstraint):ValuationSet
precondition: C over finite domains
result = {θ:Valuation | θ(C) satisfiable}
Example:
C1 = (WT ≠ SO ∧ WT ≠ NT ∧ NT ≠ SO ∧ NT ≠ Q ∧ Q ≠ SO ∧ Q ≠ NSW ∧ NSW ≠ SO ∧ NSW ≠ V ∧ V ≠ SO)
C2 = (WT ∈ {blue, green, red} ∧ SO ∈ {blue, green, red} ∧ NT ∈ {blue, green, red} ∧ Q ∈ {blue, green, red} ∧ NSW ∈ {blue, green, red} ∧ V ∈ {blue, green, red} ∧ T ∈ {blue, green, red})
Label(C1 ∧ C2) includes the valuation (SO = blue ∧ WT = red ∧ NT = green ∧ Q = red ∧ NSW = green ∧ V = red ∧ T = green)
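A brute-force sketch of Label by enumeration over the finite domains (not from the slides), shown on a three-variable fragment of the coloring constraints; applying min() with a cost function over the labeling also gives a naive implementation of the Optimize service of slide 20:

```python
# Sketch: labeling = enumerate all valuations, keep the consistent ones.
from itertools import product

def label(variables, domain, constraints):
    solutions = []
    for values in product(domain, repeat=len(variables)):
        valuation = dict(zip(variables, values))
        if all(c(valuation) for c in constraints):   # theta(C) satisfiable
            solutions.append(valuation)
    return solutions

variables = ["WT", "SO", "NT"]                        # toy fragment
constraints = [
    lambda v: v["WT"] != v["SO"],
    lambda v: v["WT"] != v["NT"],
    lambda v: v["NT"] != v["SO"],
]
sols = label(variables, ["blue", "green", "red"], constraints)
print(len(sols))                                      # -> 6 color permutations

# Naive Optimize: pick a lowest-cost solution, e.g. fewest 'red' assignments
best = min(sols, key=lambda v: sum(c == "red" for c in v.values()))
```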

22 Constraint Solvers
Constraint solver: software providing one CSP service
Many CSP services can be implemented through judicious assembly and reuse of other CSP services
Properties:
Correct
Complete
Normalizing
Set-based
Variable name independent
Monotonic (falsity preserving)
Projecting
Weakly projecting

23 Constraint Solver Properties
Correct: guaranteed to return only correct solutions
Complete: guaranteed to return all existing solutions
Possible only for small instances of CSP over specific domains
Most CSP are NP-hard; some are semi-decidable or even undecidable
The Satisfiable service returns "unknown" when it can neither conclude that the input constraint is satisfiable nor that it is unsatisfiable
Normalizing: returns results in solved form
Directly legible, usable result
No need for simplification or projection post-processing
Set-based: returns the same solution for two equivalent compound constraints differing only in primitive constraint order and/or repetitions
Variable-name independent: returns the same solution for two equivalent compound constraints differing only in variable names
Monotonic: ∀C1,C2: satisfies(C1) = false → satisfies(C1 ∧ C2) = false

24 Soft Constraints
Define a preference order over the valuations consistent with the hard constraints
Hard constraint example: a professor cannot teach two courses with overlapping time slots
Soft constraint example: a professor prefers his or her undergraduate and graduate courses to be scheduled on the same day
Most satisfiability problems with hard and soft constraints can be transformed into optimization problems: the preference defined by the soft constraints is captured by the cost function to optimize

25 Primitive Constraint Arity
Arity: number of arguments of a primitive constraint
Zero-ary: true, false
Unary: boolean negation; =, ≠, <, >, ≤, ≥ with one variable and one constant
Binary: =, ≠, <, >, ≤, ≥ with two variables
Primitive higher-order FD constraints:
Alldiff(V1 ∈ D, ..., Vn ∈ D): no pair of variables from {V1, ..., Vn} can share the same value in finite domain D
Atmost(T ∈ D, V1 ∈ D, ..., Vn ∈ D): T ≥ V1 + ... + Vn
Element(I ∈ {1, ..., n}, [V1, ..., Vn], X): if I = i, then X = Vi
Any primitive higher-order FD constraint can be converted to an equivalent conjunction of binary primitive FD constraints by introducing additional, auxiliary variables (see the sketch below)
But special-purpose propagation techniques handle primitive higher-order FD constraints far more efficiently than general-purpose propagation techniques can handle their conversion into a conjunction of binary constraints
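As a minimal illustration of the conversion (not from the slides), Alldiff needs no auxiliary variables at all: it decomposes directly into one binary ≠ constraint per unordered pair:

```python
# Sketch: Alldiff(V1, ..., Vn) as a conjunction of binary != constraints.
from itertools import combinations

def alldiff_as_binary(variables):
    """One (x, y) pair per binary constraint x != y."""
    return [(x, y) for x, y in combinations(variables, 2)]

print(alldiff_as_binary(["V1", "V2", "V3"]))
# -> [('V1', 'V2'), ('V1', 'V3'), ('V2', 'V3')]
```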

26 CSP Domains and Algorithms
[Diagram mapping constraint domains to solving algorithms: symbolic finite domains (nominal, ordinal): chronological backtracking (CBT, simple or with forward checking), conflict-directed backjumping (CDBJ), k-consistency propagation, CBT or CDBJ with k-consistency propagation, and min-conflict local search; integer and interval finite domains: bounds consistency propagation (BCP), CBT w/ BCP, CDBJ w/ BCP; real linear equations: Gauss-Jordan elimination; real linear inequalities: Fourier elimination and simplex optimization; real polynomial equations and inequalities; rational tree domains: unification]

27 Search Agents
Generic decision problem to be solved by an agent: among all possible action sequences that I can execute, which ones will result in changing the environment from its current state to another state that matches my goal?
Additional optimization problem: among those action sequences that will change the environment from its current state to a goal state, which one can be executed at minimum cost? Or which one leads to a state with maximum utility?
A search agent solves this decision problem by a generate-and-test approach:
Given the environment model, generate one by one (all) the possible states (the state space) of the environment reachable through all possible action sequences from the current state
Test each generated state to determine whether it satisfies the goal or maximizes utility
Navigation metaphor: the order of state generation is viewed as navigating the entire environment state space

28 Off-Line Goal-Based Search Agent
[Agent architecture diagram: sensors feed percepts P into percept interpretation (environment initialization), building E: EnvironmentModel with goalTest():Boolean; a search algorithm predicts action sequence effects and outputs S: ActionSequence to the effectors]

29 Off-Line Utility-Based Search Agent
[Agent architecture diagram: same structure as slide 28, but the environment model exposes utility():Number instead of a goal test; the search algorithm outputs S: ActionSequence to the effectors]

30 On-Line Goal-Based Search Agent
[Agent architecture diagram: sensors feed percepts P into percept interpretation (environment update) of E: EnvironmentModel with goalTest():Boolean; the search algorithm predicts single-action effects and outputs one action A to the effectors]

31 On-Line Utility-Based Search Agent
[Agent architecture diagram: same on-line structure as slide 30, but with utility():Number instead of a goal test; the search algorithm outputs one action A to the effectors]

32 Example of Natural Born Search Problem

33 Example of Applying the Navigation Metaphor to an Arbitrary Decision Problem
N-queens problem: how to arrange n queens on an n×n chess board in a peaceful configuration where no queen can attack another?
Full-state formulation: local navigation in the full-state space; action = move a queen
Partial-state formulation: global navigation in the partial-state space; action = insert a queen

34 Search Problem Taxonomy
State space formulation (disjoint, complete):
Full-state space search problem: initState: FullState; typicalState: FullState; action: ModifyStateAction
Partial-state space search problem: initialState: EmptyState; typicalState: PartialState; goalState: FullState; action: RefineStateAction
Test (overlapping, complete):
Goal-satisfaction search problem: test: GoalSatisfactionTest
Optimization search problem: test: utilityMaximizationTest
Solution sought (disjoint, complete):
State finding search problem: solution: State
Path finding search problem: solution: Path
Timing (disjoint, complete):
Off-line search problem: solution: Sequence(Action)
On-line search problem: solution: Action
Available information (disjoint, incomplete):
Fully informed search problem: sensor: CompletePerfectSensor; action: DeterministicAction; envtModel: CompleteEnvModel
Sensorless: sensor: MissingSensor; envModel: NonMissing
Contingency: sensor: PartialSensor; envModel: CompleteEnvModel
Exploration: envModel: Missing

35 Search Graphs and Search Trees
The state space can be represented as a search graph or a search tree
Each search graph or tree node represents an environment state
Each search graph or tree arc represents an action changing the environment from its source node to its target node
Each node or arc can be associated with a utility or cost
Each path from one node to another represents an action sequence
In a search tree, the root node represents the initial state and some leaves represent goal states
The problem of navigating the state space then becomes one of generating and maintaining a search graph or tree until a solution node or path is generated
The problem search graph or tree: an abstract concept that contains one node for each possible environment state; can be infinite
The algorithm search graph or tree: a concrete data structure; a subset of the problem search graph or tree that contains only the nodes generated and maintained (i.e., explored) at the current point during the algorithm's execution (always finite)
Problem search graph or tree branching factor: average number of actions available to the agent in each state
Algorithm effective branching factor: average number of nodes effectively generated as successors of each node during search

36 Search Graph and Tree Examples: Vacuum Cleaner World
[Figure: search graph and search tree for the vacuum cleaner world, with arcs labeled by the agent's actions (s, u, d) over clean/dirty (c, d) cell configurations]

37 Search Methods
Searching the entire space of the environment states reachable through action sequences is called exhaustive search, systematic search, blind search or uninformed search
Searching a restricted subset of that state space, based on knowledge about the specific characteristics of the problem or problem class, is called partial search
A heuristic is an insight or approximate knowledge about a problem class or problem class family on which a search algorithm can rely to improve its run time and/or space requirements
An ordering heuristic defines:
In which order to generate the problem search tree nodes,
Where to go next while navigating the state space (to get closer faster to a solution point)
A pruning heuristic defines:
Which branches of the problem search tree to avoid generating altogether,
Which subspaces of the state space not to explore (for they cannot, or are very unlikely to, contain a solution point)
Non-heuristic, exhaustive search does not scale to large problem instances (worst-case exponential in time and/or space)
Heuristic, partial search offers no guarantee to find a solution if one exists, or to find the best solution if several exist.

38 Formulating an Agent Decision Problem as a Search Problem
Define the abstract format of a generic environment state, e.g., a class C
Define the initial state, e.g., a specific object of class C
Define the successor operation:
Takes as input a state or state set S and an action A
Returns the state or state set R resulting from the agent executing A in S
Together, these three elements constitute an intensional representation of the state space
The search algorithm transforms this intensional representation into an extensional one, by repeatedly applying the successor operation starting from the initial state
Define a boolean operation testing whether a state is a goal, e.g., a method of C
For optimization problems: additionally define an operation that returns the cost or utility of an action or state
A minimal sketch of such a formulation follows.
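The sketch below is not from the slides; the class and the toy problem it instantiates are hypothetical illustrations of the four elements just listed:

```python
# Hypothetical sketch: intensional state-space representation plus goal test.
class SearchProblem:
    def __init__(self, initial_state, successors, is_goal, cost=None):
        self.initial_state = initial_state
        self.successors = successors        # state -> iterable of (action, state)
        self.is_goal = is_goal              # state -> bool (the goal test)
        self.cost = cost or (lambda action, state: 1)   # optional, for optimization

# Toy instance (assumed for illustration): reach 5 from 0 by +1 / +2 steps
problem = SearchProblem(
    initial_state=0,
    successors=lambda s: [("+1", s + 1), ("+2", s + 2)],
    is_goal=lambda s: s == 5,
)
```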

39 Problem Formulation: the Most Crucial Factor of Search Efficiency
Problem formulation is more crucial than the choice of search algorithm or heuristics to make an agent decision problem effectively solvable by state space search
8-queens problem formulation 1:
Initial state: empty board
Action: pick the column and line of one queen
Branching factor: 64
State space: ~64^8
8-queens problem formulation 2:
Action: pre-assign one column per queen, pick only the line in the pre-assigned column
Branching factor: 8
State space: ~8^8

40 Generic Exhaustive Search Algorithm
Initialize the fringe to the root node representing the initial state
Until a goal node is found in the fringe, repeat:
Choose one node from the fringe to expand by calling its successor operation
Extend the current fringe with the nodes generated by this successor operation
If optimization problem, update the path cost or utility value
Return the goal node or the path from the root node to the goal node
Specific algorithms differ in the order in which they expand the fringe nodes (the fringe is also called the open list)
[Figure: successive expansions of a search tree over Romanian cities rooted at Arad, with fringe nodes Sibiu, Timisoara, Zerind, then Fagaras, Oradea, R. Vilcea, Lugoj, ...]
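A standalone sketch of this loop (not from the slides); the fringe pop policy selects the algorithm:

```python
# Sketch of the generic fringe loop; FIFO pops give breadth-first search,
# LIFO pops give depth-first search (which may not terminate on infinite
# state spaces).
from collections import deque

def generic_search(initial, successors, is_goal, pop="fifo"):
    fringe = deque([(initial, [])])       # fringe of (state, action path) nodes
    while fringe:
        # choose one node from the fringe to expand
        state, path = fringe.popleft() if pop == "fifo" else fringe.pop()
        if is_goal(state):                # goal node found
            return path
        # extend the fringe with the nodes generated by the successor operation
        for action, succ in successors(state):
            fringe.append((succ, path + [action]))
    return None

# Toy usage: reach 5 from 0 by +1 / +2 steps
print(generic_search(0, lambda s: [("+1", s + 1), ("+2", s + 2)],
                     lambda s: s == 5))   # -> ['+1', '+2', '+2']
```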

43 Search Algorithm Characteristics and Performance
Complete: guaranteed to find a solution if one exists
Optimal (for optimization problems): guaranteed to find the best (highest utility or lowest cost) solution if one exists
Input parameters of the complexity metrics:
b = problem search tree branching factor
d = depth of the shallowest solution (or best solution for optimization problems) in the problem search tree
m = problem search tree depth (can be infinite)
Complexity metrics of algorithms:
TimeComplexity(b,d,m) = number of expanded nodes
SpaceComplexity(b,d,m) = maximum number of nodes needed in memory at one point during the execution of the algorithm

44 Exhaustive Search Algorithms
Breadth-first: expand first the shallowest node in the fringe
Uniform cost: expand first the fringe node with the lowest-cost (or highest-utility) path from the root node
Depth-first: expand first the deepest node in the fringe
Backtracking: depth-first variant with a fringe limited to a single node
Depth-limited: depth-first search stopping at depth limit N
Iterative deepening: sequence of depth-limited searches at increasing depths (see the sketch below)
Bi-directional: parallel search from the initial state and from the goal state; a solution is found when the two paths under construction intersect
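A minimal sketch of depth-limited search wrapped in an iterative-deepening loop (not from the slides):

```python
# Depth-limited search: depth-first, cut off below the given depth limit.
def depth_limited(state, successors, is_goal, limit):
    if is_goal(state):
        return []                          # empty action path: already at goal
    if limit == 0:
        return None                        # cutoff reached
    for action, succ in successors(state):
        path = depth_limited(succ, successors, is_goal, limit - 1)
        if path is not None:
            return [action] + path
    return None

# Iterative deepening: repeat depth-limited search with increasing limits.
def iterative_deepening(initial, successors, is_goal, max_depth=50):
    for limit in range(max_depth + 1):
        path = depth_limited(initial, successors, is_goal, limit)
        if path is not None:
            return path
    return None

print(iterative_deepening(0, lambda s: [("+1", s + 1), ("+2", s + 2)],
                          lambda s: s == 5))   # -> ['+1', '+2', '+2']
```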

45-52 Breadth-First Search (animation)
[Frames: breadth-first expansion of a binary search tree with nodes A through O, showing the fringe and expanded sets growing level by level]

53 Uniform-Cost Search
[Figure: uniform-cost search on a small problem graph over nodes A, B, C, D, E with edge costs 1, 5, 5, 10, 15; frames show the fringe with accumulated path costs as the cheapest node is expanded first]

54-62 Depth-First Search (animation)
[Frames: depth-first expansion of the same binary tree with nodes A through O, descending along the leftmost unexplored branch and backing up at the leaves]

63-71 Backtracking Search (animation)
[Frames: backtracking search on the same tree, keeping a single fringe node and discarding explored branches as it backs up]

72 Iterative Deepening
[Figure: iterative deepening frames for depth limits L = 0, 1, 2 on a tree rooted at A with children B and C]

73 Comparing Search Algorithms
Breadth-first: complete if b finite; optimal if all steps share the same cost; time complexity O(b^(d+1)); space complexity O(b^(d+1))
Uniform-cost: complete if all step costs are positive; optimal; time O(b^(C*/e)); space O(b^(C*/e))
Depth-first: not complete; not optimal; time O(b^m); space O(b·m)
Backtracking: not complete; not optimal; time O(b^m); space O(m)
Iterative deepening: complete if b finite; optimal if all steps share the same cost; time O(b^d); space O(b·d)
where C* = cost of the optimal solution and e is a lower bound on action cost: ∀a ∈ actions(agent), cost(a) ≥ e

74 Heuristic Search: Definition and Motivation
A non-exhaustive, partial state space search strategy, based on approximate heuristic knowledge of the search problem class (e.g., n-queens, Romania touring) or family (e.g., unordered finite domain constraint solving), allowing the search to leave unexplored (prune) state space zones that are either guaranteed or unlikely to contain a goal state (or a utility-maximizing state or cost-minimizing path)
Note: an algorithm that uses a fringe node ordering heuristic to generate a goal state faster, but is still ready to generate all state space states if necessary to find a goal state (i.e., an algorithm that does no pruning), is not a heuristic search algorithm
Motivation: exhaustive search algorithms do not scale up, neither theoretically (exponential worst-case time or space complexity) nor empirically (experimentally measured average-case time or space complexity)
Heuristic search algorithms do scale up to very large problem instances, in some cases by giving up completeness and/or optimality
New data structure: a heuristic function h(s) estimates the cost of the path from a fringe state s to a goal state

75 Best-First Global Search
Keep all expanded states on the fringe, just as in exhaustive breadth-first search and uniform-cost search
Define an evaluation function f(s) that maps each state onto a number
Expand the fringe nodes in increasing order of their f(s) values (lowest estimated cost first)
Variations:
Greedy global search (also called greedy best-first search) defines f(s) = h(s)
A* defines f(s) = g(s) + h(s), where g(s) is the real cost from the initial state to the state s, i.e., the value used to choose the state to expand in uniform-cost search (sketched below)
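A compact sketch of A* (not from the slides); swapping the f definition yields the variations above: f = h alone gives greedy best-first search, h = 0 gives uniform-cost search:

```python
# Sketch: best-first search with f = g + h over a priority queue.
import heapq

def a_star(initial, successors, is_goal, h):
    # fringe entries are (f, g, state, action path); heapq pops lowest f first
    # (states must be mutually comparable, or add a tiebreak counter)
    fringe = [(h(initial), 0, initial, [])]
    best_g = {}                                  # cheapest g found per state
    while fringe:
        f, g, state, path = heapq.heappop(fringe)
        if is_goal(state):
            return path, g
        if state in best_g and best_g[state] <= g:
            continue                             # already expanded more cheaply
        best_g[state] = g
        for action, succ, cost in successors(state):
            heapq.heappush(fringe,
                           (g + cost + h(succ), g + cost, succ, path + [action]))
    return None

# Toy usage: walk from 0 to 3 on a line, h = remaining distance (admissible)
succ = lambda s: [("step", s + 1, 1)]
print(a_star(0, succ, lambda s: s == 3, h=lambda s: max(0, 3 - s)))
# -> (['step', 'step', 'step'], 3)
```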

76 A* Example
[Figure: A* run on the Romania touring map with h(s) = straight-line distance to Bucharest; nodes are labeled with f = g + h values such as 366, 393, 413, 415, 417, 418, 447, 449, 455, 496]

77 A* Search Characteristics
Strengths:
Graph A* search is complete; Tree A* search is complete and optimal if h(s) is an admissible heuristic, i.e., if it never overestimates the real cost to a goal
Graph A* search is optimal if h(s) is admissible and in addition a monotonic (or consistent) heuristic
h(s) is monotonic iff it satisfies the triangle inequality, i.e., ∀s,s' ∈ stateSpace, (∃a ∈ actions, s' = result(a,s)) → h(s) ≤ cost(a) + h(s')
A* is optimally efficient, i.e., no other optimal algorithm will expand fewer nodes than A* using the same heuristic function h(s)
Weakness: runs out of memory for large problem instances because it keeps all generated nodes in memory
Why? Worst-case space complexity = O(b^d), unless ∀n, |h(n) − c*(n)| ≤ O(log c*(n)); very few practical heuristics verify this property

78 High-Level Design of an Object-Oriented Framework for Problem Solving as Search
[UML diagram: a StateSpaceSearch component implementing gSearch(StateSpaceSearchPb):SearchSolution, parameterized by pluggable components behind the interfaces BtStrategy (bt(Node):Node), ExpandStrategy (choose(Fringe):Node), Cost2GoalHeuristic (estimCost2Goal(Node):Real) and PruningHeuristic (prune(Node):Node[*]); a StateSpaceSearchPb aggregates States (full, goal, initial flags), AgentActions (name, cost) and a successor operation suc(State, AgentAction):State; the algorithm side maintains Nodes (/expanded, /root, /visited), a Fringe, and returns Path (/cost) or Node solutions]

79 FD Constraint Satisfaction as Search
FD CSP: D = {v1, ..., vm} ∧ X1 ∈ D ∧ ... ∧ Xn ∈ D ∧ Cs, where Cs = c1(Xi1,Xj1) ∧ ... ∧ cp(Xip,Xjp)
A finite domain allows solving by enumeration of the possible valuations (search)
Formulation as incremental search:
Initial state: no variable assigned, i.e., X1 ∈ D ∧ ... ∧ Xn ∈ D ∧ Cs
Successor function: add Xi = vj to the current valuation Xk = vl ∧ ... ∧ Xq = vr such that Xk = vl ∧ ... ∧ Xq = vr ∧ Xi = vj ∧ Cs is satisfiable
Goal test: X1 = vi1 ∧ ... ∧ Xn = vin ∧ Cs satisfiable
Path cost: 1 per step
Formulation as complete-state search:
Initial state: all variables randomly assigned, i.e., X1 = vi1 ∧ ... ∧ Xk = vik ∧ ... ∧ Xn = vin
Successor function: change the assignment of one variable in the current valuation, resulting in X1 = vi1 ∧ ... ∧ Xk = vil ∧ ... ∧ Xn = vin where vil ≠ vik
Goal test: same as incremental search

80 General Problem-Solving Search vs. FD CSP Search
General problem-solving search:
State representation: problem-specific, black-box data structure
Successor function: arbitrary black box
Goal test function: arbitrary black box
Heuristic functions: all domain-specific
FD CSP search:
State representation: standard for all CSP; compositional logical knowledge representation language
Successor function: instantiation of one piece of intensional knowledge into extensional knowledge
Goal test function: satisfaction of all instantiated primitive constraints
Heuristic functions: some domain-independent!

81 FD CSP Graphs
Summarize the dependencies between variables through constraints
Node: a variable or a constraint
Arc: a constraint or a constraint argument
Useful for:
Computing FD CSP search heuristic functions
Complexity analysis of FD CSP search algorithms

82 Backtracking Search for FD CSP
General algorithm:
At each step, choose one variable to assign and choose a value for it from the variable's associated domain
If the resulting partial valuation satisfies all the primitive constraints, recur to choose the next variable-value assignment pair
Otherwise, backtrack to an earlier assignment pair choice, choose an alternative pair and resume forward search from that point
Notes:
The path to the solution state is irrelevant
The base, uninformed version (sketched below):
Chooses the variable to assign randomly among the remaining options
Chooses the value to assign randomly among the remaining options
Always backtracks to the last choice point (chronological backtracking)
Does not perform any pruning of future options based on propagating the consequences of its last assignment Xi = vi to the domains of the variables adjacent to Xi in the constraint graph
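A minimal sketch of this base chronological backtracking (not from the slides; fixed rather than random choice order, which the slides allow as an uninformed choice):

```python
# Sketch: uninformed chronological backtracking for binary FD CSPs.
def backtracking(variables, domains, constraints):
    """constraints: list of ((x, y), pred) with pred(vx, vy) -> bool."""
    def consistent(assignment):
        # check every binary constraint whose two variables are both assigned
        return all(pred(assignment[x], assignment[y])
                   for (x, y), pred in constraints
                   if x in assignment and y in assignment)

    def recurse(assignment):
        if len(assignment) == len(variables):
            return dict(assignment)          # complete consistent valuation
        var = next(v for v in variables if v not in assignment)
        for value in domains[var]:
            assignment[var] = value
            if consistent(assignment):
                result = recurse(assignment)
                if result:
                    return result
            del assignment[var]              # chronological backtrack
        return None

    return recurse({})

# Tiny instance: 3-node map coloring
doms = {v: ["r", "g", "b"] for v in "XYZ"}
neq = lambda a, b: a != b
cons = [(("X", "Y"), neq), (("Y", "Z"), neq), (("X", "Z"), neq)]
print(backtracking(list("XYZ"), doms, cons))  # -> {'X': 'r', 'Y': 'g', 'Z': 'b'}
```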

83 Chronological Backtracking Example
(X1=r ∨ X1=b ∨ X1=g) ∧ (X2=b ∨ X2=g) ∧ (X3=r ∨ X3=b) ∧ (X4=r ∨ X4=b) ∧ (X5=b ∨ X5=g) ∧ (X6=r ∨ X6=g ∨ X6=t) ∧ (X7=r ∨ X7=b) ∧ (X1≠X2) ∧ (X1≠X3) ∧ (X1≠X4) ∧ (X1≠X7) ∧ (X2≠X6) ∧ (X3≠X7) ∧ (X4≠X5) ∧ (X4≠X7) ∧ (X5≠X6) ∧ (X5≠X7)

84-106 Chronological Backtracking Example (animation)
[Frames: chronological backtracking run on the constraint graph over X1 ... X7, with each variable's domain shown beside it (e.g., X1: r, b, g; X2: b, g; X6: r, g, t); dead ends are marked x, and each bt arrow returns to the most recent choice point before resuming forward search]

107 Heuristic Improvements of Backtracking Search for FD CSP
How to choose the next variable to assign at each forward step?
How to choose the next value to assign to that variable?
Where to backtrack to when the current valuation fails?
What to record when the current valuation fails, to avoid repeating such unsuccessful paths in the future?
There are domain-independent, general-purpose heuristics for each decision
Whether or not to combine it with consistency-based constraint propagation to prune the domains of the unassigned variables:
Such propagation can be done either as a pre-processing step
Or after each variable assignment

108 Specializing the General Search Framework for CSP
[UML diagram: CSPSearchProblem specializes SearchProblem (in full-state and partial-state formulations) and is further specialized into FDCSPSearchProblem; SearchAlgo specializes into GlobalSearchAlgo and LocalSearchAlgo and then into CSP-specific variants (GlobalFDCSP, LocalFDCSP) such as Backtracking, CDBJ and Min-Conflict; heuristics plug in as VariableChoice and ValueChoice components]

109 FD CSP BT Search Forward Phase Heuristics
Next variable choice:
Most constrained by the current partial valuation, a.k.a. the MRV (Minimum Remaining Values) heuristic
Why? Speeds up failure detection and avoids backtracking in hopeless search space regions
Most constraining to the currently unassigned variables, a.k.a. the Highest Degree (HD) heuristic
Why? Reduces the future effective branching factor
Common combination: HD as tie breaker for MRV
Next value choice: Least Constraining Value (LCV)
Why? Leaves options open, avoiding backtracks triggered by early commitment with insufficient knowledge
An instance of the general AI heuristic of "least commitment"
A sketch of these heuristics follows.
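The sketch below is not from the slides; it assumes a `neighbors` map from each variable to the variables it shares a constraint with, and domains restricted to the values still consistent:

```python
# Sketch: MRV, HD and LCV as Python key functions for != constraint networks.
def mrv(var, domains):
    return len(domains[var])               # fewest remaining values first

def degree(var, neighbors, assigned):
    # negated so that min() prefers the most constraining (highest degree)
    return -len([n for n in neighbors[var] if n not in assigned])

def choose_variable(unassigned, domains, neighbors, assigned):
    # MRV, with highest degree as tie breaker
    return min(unassigned,
               key=lambda v: (mrv(v, domains), degree(v, neighbors, assigned)))

def lcv_order(var, domains, neighbors):
    # least constraining value: rules out the fewest neighbor values
    def conflicts(value):
        return sum(value in domains[n] for n in neighbors[var])
    return sorted(domains[var], key=conflicts)
```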

110 Example Heuristic Combination: HD tie-broken by MRV and LCV
HD and LCV at pre-processing time:
X1 = r: 3 conflicts, X1 = b: 4 conflicts, X1 = g: 1 conflict, total = 8; LCV(X1) = g, r, b
X2 = b: 1 conflict, X2 = g: 2 conflicts, total = 3; LCV(X2) = b, g
X3 = r: 2 conflicts, X3 = b: 2 conflicts, total = 4; LCV(X3) = r, b
X4 = r: 2 conflicts, X4 = b: 3 conflicts, total = 5; LCV(X4) = r, b
X5 = b: 2 conflicts, X5 = g: 1 conflict, total = 3; LCV(X5) = g, b
X6 = r: 0 conflicts, X6 = g: 2 conflicts, X6 = t: 0 conflicts, total = 2; LCV(X6) = r, t, g
X7 = r: 3 conflicts, X7 = b: 4 conflicts, total = 7; LCV(X7) = r, b
HD ordering = X1, X7, X4, X3, {X2, X5}, X6
MRV at run time to tie-break between X2 and X5
Improves scalability from 25-queens for uninformed BT to 1000-queens

111 FD CSP BT Search with Forward Checking
In the forward phase, after each new variable assignment Xi = vi, add a step that deletes vi from the domains of adjacent(Xi) in the constraint graph (see the sketch below)
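A minimal sketch of this forward-checking step for ≠ constraints (not from the slides; `neighbors` is the adjacency map of the constraint graph):

```python
# Sketch: prune the assigned value from every adjacent unassigned domain;
# an emptied domain signals failure, allowing immediate backtracking.
import copy

def forward_check(var, value, domains, neighbors, assignment):
    pruned = copy.deepcopy(domains)
    for n in neighbors[var]:
        if n not in assignment and value in pruned[n]:
            pruned[n].remove(value)
            if not pruned[n]:
                return None          # wipe-out: this assignment cannot extend
    return pruned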

115 FD CSP BT Search with Forward Checking
Forward checking does not provide early detection of all failures
Example: after 3 assignments, the domains of the adjacent unassigned variables NT and SA are both reduced to {blue}, which leads to failure
Systematic early detection requires multi-step constraint propagation after each assignment

116 k-Consistency
CSP P1 = (D = {v1, ..., vm} ∧ X1 ∈ D ∧ ... ∧ Xn ∈ D ∧ Cs), where Cs is a compound constraint on X1 ... Xn
CSP P2 is a sub-problem of P1 iff it is of the form D = {v1, ..., vm} ∧ X1 ∈ D ∧ ... ∧ Xn-1 ∈ D ∧ Cs
k-Consistency: an FD CSP is k-consistent iff any consistent partial valuation involving k-1 variables can be extended into a consistent valuation additionally assigning any one of the remaining unassigned variables
1-consistency, a.k.a. node consistency: every variable has a consistent assignment in any non-empty sub-domain of D
2-consistency, a.k.a. arc consistency: every consistent single-variable assignment can be extended into a consistent variable assignment pair for any other variable
3-consistency, a.k.a. path consistency: every consistent variable assignment pair can be extended into a consistent variable assignment triple for any third variable
Strong k-consistency: an FD CSP is strongly k-consistent iff it is k-consistent, (k-1)-consistent, ..., path-consistent, arc-consistent and node-consistent

117 k-Consistency Examples
CSP1: X < Y ∧ Y < Z ∧ Z ≤ 2 ∧ X ∈ D ∧ Y ∈ D ∧ Z ∈ D ∧ D = {1,2,3,4} is not node consistent
The primitive constraint Z ≤ 2 rules out any consistent assignment for Z over the sub-domain {3,4}
CSP2: X < Y ∧ Y < Z ∧ Z ≤ 2 ∧ X ∈ D ∧ Y ∈ D ∧ Z ∈ D ∧ D = {1,2} is node consistent
CSP2 is not arc consistent: Y = 1 ∧ X < Y ∧ X ∈ {1,2} is unsatisfiable
The Australia map coloring problem with two colors is not globally satisfiable but is still arc-consistent

118 Node and Arc-Consistency Propagation
Node consistency: for each unary constraint U(X), delete from Domain(X) all the values that violate U
Arc consistency: for each binary constraint B(X,Y), delete from Domain(X) and Domain(Y) all the values that violate the arc consistency of the CSP (see the sketch below)
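A sketch of the classic AC-3 arc-consistency propagation algorithm, which the slides describe but do not name (not from the slides; binary constraints are given as predicates over directed arcs):

```python
# Sketch: AC-3 over directed arcs; pred[(x, y)](vx, vy) -> bool.
from collections import deque

def ac3(domains, pred):
    queue = deque(pred.keys())                    # all directed arcs (x, y)
    while queue:
        x, y = queue.popleft()
        # revise: keep only values of x supported by some value of y
        supported = [vx for vx in domains[x]
                     if any(pred[(x, y)](vx, vy) for vy in domains[y])]
        if len(supported) < len(domains[x]):
            domains[x] = supported
            if not supported:
                return False                      # domain wipe-out: inconsistent
            # re-examine arcs pointing at x (except the one just used)
            queue.extend((z, w) for (z, w) in pred if w == x and z != y)
    return True

# Toy usage: X < Y over {1,2} x {1,2}
doms = {"X": [1, 2], "Y": [1, 2]}
print(ac3(doms, {("X", "Y"): lambda a, b: a < b,
                 ("Y", "X"): lambda a, b: a > b}), doms)
# -> True {'X': [1], 'Y': [2]}
```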

119 Node and Arc Consistency Example
[Figure: coloring Australia (WA, NT, SA, Q, NSW, V, T) under the map constraints; the node consistency pass prunes unary-inconsistent values]

120-122 Node and Arc Consistency Example (animation)
[Frames: successive arc consistency passes pruning the domains on the Australia map coloring constraint graph (WA, NT, SA, Q, NSW, V, T)]

123 Backjumping
Backjumping, or dependency-directed backtracking, algorithms improve on chronological backtracking by attempting to backtrack directly to the deep cause of the failure up the proof tree
They cache and update additional constraint dependency information, derived from the constraint graph and the current partial valuation, during both the forward and backward phases of the search

124 Conflict Sets
Given a constraint graph G, a partial valuation A is a conflict set for variable X if ∀V ∈ Dom(X), ∃C ∈ G, A ∧ X=V ∧ C |= false, i.e., all domain values of X are ruled out by some constraint involving A
e.g., the valuation X1=r ∧ X2=b ∧ X3=b ∧ X4=b ∧ X5=g ∧ X6=r is a conflict set for X7, since X7=r conflicts with X1=r via X1≠X7, and X7=b conflicts with X3=b via X3≠X7 and with X4=b via X4≠X7
A conflict set is minimal if it does not contain any conflict subset
e.g., the valuations X1=r ∧ X3=b and X1=r ∧ X4=b are minimal conflict sets for X7
X1=r ∧ X3=b is the Earliest Minimal Conflict Set (EMCS) for X7 with variable ordering [X1, ..., X7]

125 Dead-end Variables and Jump-back Sets
A dead-end valuation is a partial valuation X1=v1 ∧ ... ∧ Xi=vi that is a conflict set for the dead-end variable Xi+1
The jump-back set of a dead-end variable is its EMCS with the current variable ordering
When failure occurs at a dead-end variable, Conflict-Directed Back-Jumping (CDBJ) backtracks directly (or jumps back) to the latest variable of its jump-back set with the current variable ordering
Jump-back sets are accumulated (AJBS) during the forward search phase as follows: when choosing Xi=vk, add it to the conflict sets of all variables Xj such that ∃C(..., Xi, ..., Xj, ...) ∈ G and Xi=vk ∧ C |= false
Jump-back sets are updated (UJBS) during the backtracking phase as follows: after jumping back from Xd to Xb, UJBS(Xb) = UJBS(Xb) ∪ (AJBS(Xd) − {Xb=vb})
A simplified sketch follows.
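A simplified, Gaschnig-style sketch of computing a jump-back set (not from the slides; `conflicts` is an assumed predicate testing whether y=vy rules out x=vx under some constraint between y and x):

```python
# Sketch: for each value of x, find the earliest assigned culprit ruling it
# out; if every value is ruled out, x is a dead end and the union of the
# culprits is its jump-back set.
def jump_back_set(x, domain_x, order, assignment, conflicts):
    culprits = set()
    for vx in domain_x:
        for y in order:                    # assigned variables, earliest first
            if y in assignment and conflicts(y, assignment[y], x, vx):
                culprits.add(y)            # earliest culprit ruling out x = vx
                break
        else:
            return None                    # x = vx survives: not a dead end
    return culprits

def backjump_target(culprits, order):
    # CDBJ jumps back to the latest culprit in the current variable ordering
    return max(culprits, key=order.index)
```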

126-129 Simple CDBJ Run Example (animation)
[Frames: CDBJ run on the X1 ... X7 map coloring constraint graph, marking dead-end variables and backjumping directly to the latest jump-back set variable; finds the first solution in 30 steps]

130-132 CDBJ with Forward Checking (animation)
[Frames: the same run with forward checking pruning adjacent domains after each assignment; finds the first solution in 20 steps]

133 CDBJ w/ FC + HD, MRV and LCV Heuristics
[Frames: the same problem solved with CDBJ, forward checking and the HD, MRV and LCV heuristics (MRV tie-breaking X2 vs. X5 at run time): 7 steps, no backtrack!]

134 Min-Conflict Example
States: 4 queens in 4 columns (4^4 = 256 states)
Actions: move a queen within its column
Goal test: no attacks
Evaluation: h(n) = number of attacks
Given a random initial state, min-conflict can solve n-queens in almost constant time for arbitrary n with high probability (e.g., n = 10,000,000); a sketch follows
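A minimal min-conflicts sketch for n-queens (not from the slides), with one queen per column and the state as the row of each queen:

```python
# Sketch: min-conflicts local search for n-queens.
import random

def conflicts(rows, col, row):
    # number of queens in other columns attacking square (col, row)
    return sum(1 for c, r in enumerate(rows)
               if c != col and (r == row or abs(r - row) == abs(c - col)))

def min_conflicts(n, max_steps=100_000):
    rows = [random.randrange(n) for _ in range(n)]       # random initial state
    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(rows, c, rows[c])]
        if not conflicted:
            return rows                                  # goal test: no attacks
        col = random.choice(conflicted)                  # pick a conflicted queen
        # move it to the row minimizing h(n) = number of attacks
        rows[col] = min(range(n), key=lambda r: conflicts(rows, col, r))
    return None

print(min_conflicts(8))
```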

135 Performance Experiments
Problem | Backtracking | BT+MRV | Forward Checking | FC+MRV | Minimum Conflicts
USA (4 colors) | (>1,000K) | (>1,000K) | 2K | 60 | 64
n-Queens (2 < n < 50) | (>40,000K) | 13,500K | (>40,000K) | 817K | 4K
Zebra (ex. 5.13) | 3,859K | 1K | 35K | 0.5K | 2K
Random 1 | 415K | 3K | 26K | 2K | not run
Random 2 | 942K | 27K | 77K | 15K | not run

