
1 Computer Science CPSC 322, Lecture 7: Search Wrap-Up, Intro to Constraint Satisfaction Problems

2 Lecture Overview: Recap of previous lecture; Dynamic programming for search; Other advanced search algorithms; Intro to CSP (time permitting)

3 A* Properties. We showed that A* is optimal and complete.

4-5 A* Properties. Which of the following assumptions was NOT made in the result that A* is admissible?
A. Arc costs are bounded above 0
B. Branching factor is finite
C. h(n) is an underestimate of the cost of the shortest path from n to a goal
D. The costs around a cycle must sum to zero
(Answer: D. The proof assumes A, B and C; with arc costs strictly above 0, the costs around a cycle could not sum to zero anyway.)

6 Cycle Checking. If we want to get rid of cycles but still be able to find multiple solutions, do cycle checking: check whether the node being added to a path already appears on that path.
– In DFS-type search algorithms, cycle checks can be cheap: as low as constant time.
– In BFS-type search algorithms, cycle checking requires time linear in the length of the expanded path.

7 Multiple Path Pruning. If we only want one path to the solution:
– Can prune a path to a node n that has already been reached via a previous path (this subsumes cycle checking).
– Must make sure that we are not pruning a shorter path to the node.
  o No need to worry with LCFS: it always finds the shortest path to any node on the frontier first.
  o We saw that A* usually does not. However, there are special conditions on h(n) that can recover the LCFS guarantee for A*: the monotone restriction (see P&M text, Section 3.7.2).
(A minimal sketch follows.)
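A minimal sketch of lowest-cost-first search with multiple-path pruning, assuming a successor function neighbors(node) that yields (neighbour, arc cost) pairs; the function and argument names are illustrative, not from the slides:

```python
from heapq import heappush, heappop
from itertools import count

def lcfs_with_pruning(start, is_goal, neighbors):
    """Lowest-cost-first search with multiple-path pruning."""
    tie = count()                       # avoids comparing paths on cost ties
    frontier = [(0, next(tie), [start])]
    expanded = set()                    # nodes already reached
    while frontier:
        cost, _, path = heappop(frontier)
        node = path[-1]
        if node in expanded:            # multiple-path pruning
            continue
        expanded.add(node)
        if is_goal(node):
            return path, cost
        for nxt, arc_cost in neighbors(node):
            if nxt not in expanded:
                heappush(frontier, (cost + arc_cost, next(tie), path + [nxt]))
    return None
```

Discarding any later path to an already-expanded node is safe here precisely because LCFS reaches every node via a cheapest path first.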

8-9 Which of the following is TRUE?
A. Multiple-path pruning increases the space use of breadth-first search from linear to exponential
B. Multiple-path pruning increases the space use of A* from linear to exponential
C. Multiple-path pruning increases the space use of depth-first search from linear to exponential
D. Multiple-path pruning doesn't increase the space use of any of these methods
(Answer: C. BFS and A* already use exponential space; DFS normally uses linear space, but storing all previously reached nodes makes it exponential.)

10 Branch-and-Bound Search. One way to combine DFS with heuristic guidance.
– Follows exactly the same search path as depth-first search.
– But to ensure optimality, it does not stop at the first solution found: it continues, after recording an upper bound on solution cost.
  o Upper bound: UB = cost of the best solution found so far, initialized to ∞ or any overestimate of the optimal solution cost.
– When a path p is selected for expansion, compute the lower bound LB(p) = f(p):
  o If LB(p) ≥ UB, remove p from the frontier without expanding it (pruning).
  o Else expand p, adding all of its neighbours to the frontier.
(A sketch of this loop follows.)
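A sketch of the branch-and-bound loop described above, under the same assumptions about neighbors(node) and an admissible h; names are illustrative:

```python
import math

def branch_and_bound(start, is_goal, neighbors, h):
    """DFS-based branch and bound; returns a cheapest path and its cost.

    Assumes a finite graph (as the slides note, B&B can otherwise get
    stuck following infinite paths) and an admissible h.
    """
    best_path, ub = None, math.inf      # UB = best solution cost so far
    frontier = [(0, [start])]           # LIFO stack of (cost, path)
    while frontier:
        cost, path = frontier.pop()     # depth-first: last in, first out
        node = path[-1]
        if cost + h(node) >= ub:        # LB(p) = f(p) >= UB: prune
            continue
        if is_goal(node):
            best_path, ub = path, cost  # record new upper bound, keep going
            continue
        for nxt, arc_cost in neighbors(node):
            frontier.append((cost + arc_cost, path + [nxt]))
    return (best_path, ub) if best_path else None
```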

11 Search Methods so Far

Method | Complete | Optimal | Time | Space
DFS | N | N | O(b^m) | O(mb)
BFS | Y | Y | O(b^m) | O(b^m)
IDS | Y | Y | O(b^m) | O(mb)
LCFS (when arc costs available) | Y (costs > 0) | Y (costs ≥ 0) | O(b^m) | O(b^m)
Best First (when h available) | N | N | O(b^m) | O(b^m)
A* (arc costs > 0, h admissible) | Y | Y (optimally efficient) | O(b^m) | O(b^m)
Branch-and-Bound | N (Y with finite initial bound) | Y (if h admissible) | O(b^m) | O(mb)

DFS, BFS and IDS are uninformed; LCFS is uninformed but uses arc costs; Best First, A* and Branch-and-Bound are informed (goal-directed).

12 Lecture Overview: Recap of previous lecture; Dynamic programming for search; Other advanced search algorithms; Intro to CSP (time permitting)

13 Dynamic Programming. Idea: for statically stored graphs, build a table of dist(n), the actual distance of the shortest path from node n to a goal g. This is the perfect …………. [Example graph with nodes k, c, b, m, g, z and arc costs 2, 3, 1, 2, 4, 1.] dist(g) = … dist(z) = … dist(c) = … dist(b) = …

14 Dynamic Programming (cont.). This is the perfect …………. dist(g) = 0, dist(z) = 1, dist(c) = 3, dist(b) = 4. dist(k) = ? A. 6 B. 7 C. ∞

15-16 Dynamic Programming (cont.). This is the perfect h(n). dist(g) = 0, dist(z) = 1, dist(c) = 3, dist(b) = 4, dist(k) = 6. dist(m) = ? A. 6 B. 7 C. ∞

17 Dynamic Programming. dist(n) can be built backwards from the goal:
dist(n) = 0 if n is a goal; otherwise dist(n) = min over all neighbours m of n of (cost(n, m) + dist(m)).

18 Dynamic Programming (cont.). dist(n) = 0 if n is a goal; otherwise dist(n) = min over all neighbours m of n of (cost(n, m) + dist(m)). This requires explicit goal nodes (the previous methods only assumed a function that recognizes goal nodes). Worked example from the slide's figure: dist(KD) = min[(4 + 8), (3 + 8)] = 11.

19 Dynamic Programming. Idea: for statically stored graphs, build a table of dist(n), the actual distance of the shortest path from node n to a goal g: this is the perfect h. How could we implement that? Run LCFS with multiple-path pruning on the backwards graph (arcs reversed), starting from the goal. This can be done in AIspace by using “invert graph” in create mode. (A sketch follows.)
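A sketch of the dist-table construction, i.e. LCFS (Dijkstra) with multiple-path pruning run from the goal over reversed arcs. The dict-of-arcs encoding is an assumption, and the example arc costs are inferred from the dist values on slides 14-15:

```python
from heapq import heappush, heappop

def build_dist(arcs, goals):
    """dist(n) for every node that can reach a goal.

    arcs: dict mapping node -> list of (neighbour, cost) pairs.
    Runs LCFS from the goals over the reversed arcs; nodes with no
    path to a goal simply get no entry (dist = infinity).
    """
    reversed_arcs = {}
    for n, succs in arcs.items():
        for m, c in succs:
            reversed_arcs.setdefault(m, []).append((n, c))
    dist = {}
    frontier = [(0, g) for g in goals]
    while frontier:
        d, n = heappop(frontier)
        if n in dist:                       # multiple-path pruning
            continue
        dist[n] = d
        for m, c in reversed_arcs.get(n, []):
            if m not in dist:
                heappush(frontier, (d + c, m))
    return dist

# Consistent with the dist values on slides 14-15 (arc costs assumed
# from those values: k->b 2, b->c 1, c->z 2, z->g 1):
example = {"k": [("b", 2)], "b": [("c", 1)], "c": [("z", 2)], "z": [("g", 1)]}
print(build_dist(example, goals=["g"]))  # {'g': 0, 'z': 1, 'c': 3, 'b': 4, 'k': 6}
```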

20 LCFS on Inverted Graph (sample steps)

21 Dynamic Programming (cont.). When it's time to act (move forward from the start): from each node n, always pick the neighbour m that minimizes cost(n, m) + dist(m). Problems?

22 Dynamic Programming (cont.). Problems?
– Needs space to explicitly store the full search graph.
– The dist function needs to be recomputed for each goal: the method is not suitable when goals vary often.

23 Lecture Overview: Recap of previous lecture; Dynamic programming for search; Other advanced search algorithms; Intro to CSP (time permitting)

24 Iterative Deepening A* (IDA*). Branch & Bound (B&B) can still get stuck in infinite (or extremely long) paths. Remedy: search depth-first, but only to a fixed bound, as we did for Iterative Deepening.

25 Iterative Deepening DFS (IDS) in a Nutshell. Use DFS to look for solutions at depth 1, then 2, then 3, etc.
– For depth bound D, ignore any paths of longer length (depth-bounded depth-first search).
– If no goal is found, re-start from scratch with depth bound 2; if still no goal, re-start with depth bound 3; and so on.

26-27 Which of the following is false?
A. Iterative deepening saves space over breadth-first search
B. Iterative deepening finds the same answer as breadth-first search
C. Iterative deepening runs faster than breadth-first search
D. Iterative deepening recomputes elements of the frontier that breadth-first search stores
(Answer: C. Because of the recomputation in D, iterative deepening is somewhat slower than breadth-first search, not faster; its advantage is space, not time.)

28 Iterative Deepening A* (IDA*). Like Iterative Deepening DFS, but the “depth” bound is measured in terms of the f value.
– IDA* is a bit of a misnomer:
  o The only thing it has in common with A* is that it uses the f value f(p) = cost(p) + h(p).
  o It does NOT expand the path with lowest f value. It is doing DFS!
  o But “f-value-bounded DFS” doesn't sound as good...
– Start with f-bound = f(s), where s is the start node.
– If you don't find a solution at a given f-bound, increase the bound to the minimum of the f-values that exceeded the previous bound.
– Will explore all nodes n with f value ≤ f_min (the optimal one), under the same conditions as for the optimality of A*.
(A sketch follows.)
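A sketch of IDA* as f-value-bounded DFS, assuming admissible h and arc costs > 0; names are illustrative:

```python
import math

def ida_star(start, is_goal, neighbors, h):
    """Iterative deepening A*: DFS with a growing f-value bound."""
    bound = h(start)                    # initial bound = f(s)
    while bound < math.inf:
        next_bound = math.inf           # smallest f that exceeded the bound
        stack = [(0, [start])]
        while stack:
            cost, path = stack.pop()    # DFS: LIFO
            node = path[-1]
            f = cost + h(node)
            if f > bound:
                next_bound = min(next_bound, f)
                continue
            if is_goal(node):
                return path, cost
            for nxt, arc_cost in neighbors(node):
                if nxt not in path:     # cycle check
                    stack.append((cost + arc_cost, path + [nxt]))
        bound = next_bound              # min of the f-values that exceeded it
    return None
```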

29 Analysis of Iterative Deepening A* (IDA*).
– Complete and optimal? Yes, under the same conditions as A*.
– Time complexity: O(b^m), same as DFS, even though we visit paths multiple times (go back to the slides on uninformed ID).
– Space complexity: O(bm), same as DFS and IDS.

30 Search Methods so Far

Method | Complete | Optimal | Time | Space
DFS | N | N | O(b^m) | O(mb)
BFS | Y | Y | O(b^m) | O(b^m)
IDS | Y | Y | O(b^m) | O(mb)
LCFS (when arc costs available) | Y (costs > 0) | Y (costs ≥ 0) | O(b^m) | O(b^m)
Best First (when h available) | N | N | O(b^m) | O(b^m)
A* (arc costs > 0, h admissible) | Y | Y (optimally efficient) | O(b^m) | O(b^m)
Branch-and-Bound | N (Y with finite initial bound) | Y (if h admissible) | O(b^m) | O(mb)
IDA* (arc costs > 0, h admissible) | Y | Y | O(b^m) | O(mb)

DFS, BFS and IDS are uninformed; LCFS is uninformed but uses arc costs; Best First, A*, Branch-and-Bound and IDA* are informed (goal-directed).

31 Heuristic DFS. Other than IDA*, how else can we use heuristic information in DFS?

32 Heuristic DFS. Other than IDA*, how else can we use heuristic information in DFS? When we expand a node, we put all its neighbours on the frontier. In which order? The order matters because DFS uses a LIFO stack. We can use heuristic guidance: h or f. A perfect heuristic f would solve the problem without any backtracking. Heuristic DFS is very frequently used in practice: simply choose promising branches first, based on any kind of information available.

33 Heuristic DFS (cont.). Does the heuristic have to be admissible? A. Yes B. No C. It depends

34 Heuristic DFS (cont.). Does the heuristic have to be admissible? B. No. We are still doing DFS, i.e., following each path all the way to the end before trying any other. (A sketch of heuristic DFS follows.)
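A sketch of heuristic DFS, ordering the stack by h; as the slide notes, h need not be admissible here, since it only orders the neighbours:

```python
def heuristic_dfs(start, is_goal, neighbors, h):
    """DFS that explores the most promising neighbour first.

    A bad h costs time but never correctness: the search is still DFS.
    """
    frontier = [[start]]                # LIFO stack of paths
    while frontier:
        path = frontier.pop()
        node = path[-1]
        if is_goal(node):
            return path
        succs = [m for m, _ in neighbors(node) if m not in path]  # cycle check
        # Push worst-first so the best h value lands on top of the stack.
        for m in sorted(succs, key=h, reverse=True):
            frontier.append(path + [m])
    return None
```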

35 Heuristic DFS and More. Can we combine heuristic DFS with IDA*?

36 Heuristic DFS and More (cont.). Can we combine this with IDA*? Yes: DFS with an f-value bound (using an admissible heuristic h), putting neighbours onto the frontier in a smart order (using some heuristic h'). Can, of course, also choose h' = h.

37 Memory-bounded A*. Iterative deepening A* and B&B use little memory. What if we have some more memory (but not enough for regular A*)?
– Do A* and keep as much of the frontier in memory as possible.
– When running out of memory, delete the worst paths (highest f value) from the frontier (e.g., p1, ..., pn in the slide's figure).
– Back up the value of the deleted paths to a common ancestor (e.g., node N in the figure): a way to remember the potential value of the “forgotten” paths.
– The corresponding subtrees get regenerated only when all other paths have been shown to be worse than the “forgotten” path.

38 MBA*: Compute New h(p). If we want to prune subpaths p1, p2, ..., pn below a common ancestor p and “back up” their value to p:
– (cost(pi) - cost(p)) + h(pi) gives the estimated cost of the pruned subpath from p to pi.
– min_i [(cost(pi) - cost(p)) + h(pi)] gives the most promising of the pruned subpaths' estimated costs.
– Taking the max with the old h(p) gives a tighter h value for p:
  new h(p) = max( old h(p), min_i [(cost(pi) - cost(p)) + h(pi)] )
(The backup in code follows.)
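The backup in code, with hypothetical argument names (the slide gives only the formula):

```python
def backup_h(h_old_p, cost_p, pruned):
    """New h(p) after pruning paths p_1..p_n below ancestor p.

    pruned: list of (cost_pi, h_pi) pairs for the deleted paths.
    """
    best_forgotten = min(cost_pi - cost_p + h_pi for cost_pi, h_pi in pruned)
    return max(h_old_p, best_forgotten)   # keep the tighter estimate
```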

39 Memory-bounded A* (cont.). Details of the algorithm are beyond the scope of this course, but:
– It is complete if there is any reachable solution, i.e., a solution at a depth manageable by the available memory.
– It is optimal if the optimal solution is reachable; otherwise it returns the best reachable solution given the available memory.
– Often used in practice: considered one of the best algorithms for finding optimal solutions under memory limitations.
– It can get bogged down by having to switch back and forth among a set of candidate solution paths, of which only a few fit in memory.

40 Recap (Must Know How to Fill This In)

Method | Selection | Complete | Optimal | Time | Space
DFS | | | | |
BFS | | | | |
IDS | | | | |
LCFS | | | | |
Best First | | | | |
A* | | | | |
B&B | | | | |
IDA* | | | | |
MBA* | | | | |

41 Recap (Must Know How to Fill This In)

Method | Selection | Complete | Optimal | Time | Space
DFS | LIFO | N | N | O(b^m) | O(mb)
BFS | FIFO | Y | Y | O(b^m) | O(b^m)
IDS | LIFO | Y | Y | O(b^m) | O(mb)
LCFS | min cost | Y** | Y** | O(b^m) | O(b^m)
Best First | min h | N | N | O(b^m) | O(b^m)
A* | min f | Y** | Y** | O(b^m) | O(b^m)
B&B | LIFO + pruning | Y** | Y** | O(b^m) | O(mb)
IDA* | LIFO | Y** | Y** | O(b^m) | O(mb)
MBA* | min f | Y** | Y** | O(b^m) | O(b^m)

** Needs conditions: you need to know what they are.

42 Algorithms Often Used in Practice

Method | Selection | Complete | Optimal | Time | Space
DFS | LIFO | N | N | O(b^m) | O(mb)
BFS | FIFO | Y | Y | O(b^m) | O(b^m)
IDS | LIFO | Y | Y | O(b^m) | O(mb)
LCFS | min cost | Y** | Y** | O(b^m) | O(b^m)
Best First | min h | N | N | O(b^m) | O(b^m)
A* | min f | Y** | Y** | O(b^m) | O(b^m)
B&B | LIFO + pruning | Y** | Y** | O(b^m) | O(mb)
IDA* | LIFO | Y | Y | O(b^m) | O(mb)
MBA* | min f | Y** | Y** | O(b^m) | O(b^m)

** Needs conditions: you need to know what they are.

43 Search in Practice. [Decision flowchart: questions “Many paths to solution, no ∞ paths?”, “Informed?”, “Large branching factor?” route (via NO/Y branches) to IDS, B&B, IDA*, or MBA*.] These are general guidelines; specific problems might yield different choices.

44 Remember Deep Blue? Deep Blue's results in the second tournament: won 3 games, lost 2, tied 1.
– 30 CPUs + 480 chess processors.
– Searched 126,000,000 nodes per second; generated 30 billion positions per move, routinely reaching depth 14.
– Iterative deepening with an evaluation function (similar to a heuristic) based on 8,000 features (e.g., sum of the worth of the pieces: pawn 1, rook 5, queen 10).

45 Sample applications
– An Efficient A* Search Algorithm for Statistical Machine Translation (2001).
– The Generalized A* Architecture. Journal of Artificial Intelligence Research (2007). Machine vision: “Here we consider a new compositional model for finding salient curves.”
– Factored A* Search for Models over Sequences and Trees. International Conference on AI (2003). Applied to NLP and bioinformatics. It starts by saying: “The primary challenge when using A* search is to find heuristic functions that simultaneously are admissible, close to actual completion costs, and efficient to calculate.”
– Recursive Best-First Search with Bounded Overhead (AAAI 2015): “We show empirically that this improves performance in several domains, both for optimal and suboptimal search, and also yields a better linear-space anytime heuristic search. RBFSCR is the first linear space best-first search robust enough to solve a variety of domains with varying operator costs.”

46 Learning Goals for Search
– Identify real-world examples that make use of deterministic, goal-driven search agents.
– Assess the size of the search space of a given search problem.
– Implement the generic solution to a search problem.
– Apply basic properties of search algorithms: completeness, optimality, time and space complexity.
– Select the most appropriate search algorithm for a specific problem.
– Define/read/write/trace/debug the different search algorithms covered.
– Implement cycle checking and multiple-path pruning for different algorithms; identify when they are appropriate.
– Construct heuristic functions for specific search problems.
– Formally prove A* optimality.
– Understand the general ideas behind dynamic programming and MBA*.

47 Course Overview: First Part of the Course. [Course map figure: Environment (deterministic vs. stochastic) and Problem Type (static: query; sequential: planning), with Representation and Reasoning techniques: Search; CSPs (vars + constraints; search, arc consistency); Logics; STRIPS; Belief Nets, Decision Nets and Markov Processes with Variable Elimination and Value Iteration.]

48 Standard vs. Specialized Search. So far we studied general state-space search in isolation. Standard search problem: search in a state space, where a state is a “black box”, i.e. any arbitrary data structure that supports three problem-specific routines:
– goal test: goal(state)
– finding successor nodes: neighbors(state)
– if applicable, a heuristic evaluation function: h(state)
We'll see more specialized versions of search for various problems.

49 Course Overview. [Same course map figure as slide 47.]

50 We will look at Search in Specific R&R Systems. For each of Constraint Satisfaction Problems (CSPs), Query, and Planning, we will define: state, successor function, goal test, solution, heuristic function.

51 Course Overview. [Same course map figure as slide 47.] We'll start from CSPs.

52 We will look at Search for CSPs, defining: state, successor function, goal test, solution, heuristic function.

53 Lecture Overview: Recap of previous lecture; Dynamic programming for search; Other advanced search algorithms; Intro to CSP (time permitting)

54 CSPs: Crossword Puzzles (Proverb). Source: Michael Littman.

55 Constraint Satisfaction Problems (CSPs). In a CSP:
– state is defined by a set of variables V_i with values from domain D_i
– goal test is a set of constraints specifying:
  1. allowable combinations of values for subsets of variables (hard constraints)
  2. preferences over values of variables (soft constraints)

56 Dimensions of Representational Complexity (from lecture 2)
– Reasoning tasks (constraint satisfaction / logic & probabilistic inference / planning)
– Deterministic versus stochastic domains
Some other important dimensions of complexity:
– Explicit state, or features, or relations (our current focus)
– Flat or hierarchical representation
– Knowledge given versus knowledge learned from experience
– Goals versus complex preferences
– Single-agent vs. multi-agent

57 Explicit State vs. Features (lecture 2). How do we model the environment?
– You can enumerate the possible states of the world, or
– A state can be described in terms of features: an assignment to (one or more) features. Often the more natural description.
– 30 binary features can represent 2^30 = 1,073,741,824 states.

58 Variables/Features and Possible Worlds
– Variable: a synonym for feature. We denote variables using capital letters. Each variable V has a domain dom(V) of possible values.
– Variables can be of several main kinds:
  o Boolean: |dom(V)| = 2
  o Finite: |dom(V)| is finite
  o Infinite but discrete: the domain is countably infinite
  o Continuous: e.g., real numbers between 0 and 1
– Possible world: complete assignment of values to each variable. This is equivalent to a state as we have defined it so far. Soon, however, we will give a broader definition of state, so it is best to start distinguishing the two concepts.

59 Example (lecture 2): Mars Explorer

Variable | Domain
Weather | {S, C}
Temperature | [-40, 40]
Longitude | [0, 359]
Latitude | [0, 179]

One possible world (state): {S, -30, 320, 210}. Number of possible (mutually exclusive) worlds (states): 2 x 81 x 360 x 180, the product of the cardinalities of the domains, always exponential in the number of variables (a quick check follows).
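A quick check of the counting rule for the domain sizes above:

```python
from math import prod

domain_sizes = [2, 81, 360, 180]   # Weather, Temperature, Longitude, Latitude
print(prod(domain_sizes))          # 10497600 possible worlds
```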

60-61 How many possible worlds? Crossword Puzzle 1:
– variables are words that have to be filled in
– domains are English words of the correct length
– possible worlds: all ways of assigning words
Number of English words? Let's say 150,000. Of the right length? Assume for simplicity 15,000 for each length. Number of words to be filled in? 63. How many possible worlds (assume any combination is OK)?
A. 15,000 x 63 B. 15,000^63 C. 63^15,000 D. 1,563^63

62 How many possible worlds? Crossword Puzzle 1 (cont.): with 15,000 candidate words for each of the 63 slots, there are 15,000^63 possible worlds.

63-64 How many possible worlds? Crossword 2:
– variables are cells (individual squares)
– domains are letters of the alphabet
– possible worlds: all ways of assigning letters to cells
Number of empty cells? 15 x 15 - 32 = 193. Number of letters in the alphabet? 26. How many possible worlds (assume any combination is OK)? 26^193. In general: (domain size)^(#variables).

65 Examples: variables, domains, possible worlds. Scheduling problem:
– variables are the different tasks that need to be scheduled (e.g., courses at a university; jobs in a machine shop)
– domains are the available times and resources needed for a task (e.g., time/room for a course; time/machine for a job)
– possible worlds: time/resource assignments for each task

66 Constraint Satisfaction Problems (CSPs) allow for the use of general-purpose algorithms with more power than standard search algorithms: they exploit the multi-dimensional nature of the problem and the structure provided by the goal's set of constraints, *not* a black box.

67 Lecture Overview: Recap of previous lecture; Dynamic programming for search; Other advanced search algorithms; Intro to CSP: constraints and models for a CSP

68 Constraints. Constraints are restrictions on the values that one or more variables can take.
– Unary constraint: restriction involving a single variable.
– k-ary constraint: restriction involving k different variables. We will mostly deal with binary constraints.
Constraints can be specified by:
1. listing all combinations of valid domain values for the variables participating in the constraint
2. giving a function that returns true when given values for each variable which satisfy the constraint

69-70 Constraints: Simple Example
– Unary constraint, e.g.: V2 ≠ 2
– k-ary constraint, e.g. binary (k = 2): V1 + V2 < 5; 3-ary: V1 + V2 + V4 < 5. We will mostly deal with binary constraints.
With V = {V1, V2}, dom(V1) = {1, 2, 3} and dom(V2) = {1, 2}, the constraint V1 > V2 can be specified by:
1. listing all combinations of valid domain values: (V1, V2) in {(2, 1), (3, 1), (3, 2)}
2. giving a function (predicate) that returns true if the given values for each variable satisfy the constraint, else false.
(A sketch of both encodings follows.)
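Both encodings of V1 > V2 in code, following the slide's example:

```python
from itertools import product

# 1. As an explicit list of valid value combinations:
valid_pairs = {(2, 1), (3, 1), (3, 2)}

# 2. As a predicate:
def constraint(v1, v2):
    return v1 > v2

# The two encodings agree on every pair of domain values:
assert all(((a, b) in valid_pairs) == constraint(a, b)
           for a, b in product({1, 2, 3}, {1, 2}))
```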

71 Examples. Crossword Puzzle 1:
– variables are words that have to be filled in
– domains are valid English words
– constraints: words have the same letters at the points where they intersect, e.g. h1[0] = v1[0], h1[1] = v2[0], ... (~225 constraints)

72 Example: Map-Coloring
– Variables: WA, NT, Q, NSW, V, SA, T
– Domains: D_i = {red, green, blue}
– Constraints: adjacent regions must have different colors, e.g., WA ≠ NT, NT ≠ SA, NT ≠ Q, ...
Equivalently, as lists of valid combinations, e.g. for (WA, NT): {(red, green), (red, blue), (green, red), (green, blue), (blue, red), (blue, green)}, and similarly for (NT, SA), (NT, Q), etc.
(A data sketch follows.)
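The map-colouring CSP as data, a sketch; the adjacency pairs are the standard borders of the Australian map that the slide's "..." elides:

```python
variables = ["WA", "NT", "Q", "NSW", "V", "SA", "T"]
domains = {v: {"red", "green", "blue"} for v in variables}
adjacent = [("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"),
            ("SA", "Q"), ("SA", "NSW"), ("SA", "V"), ("Q", "NSW"),
            ("NSW", "V")]   # T is an island: no binary constraints

def satisfies(assignment):
    """True iff a complete assignment violates no inequality constraint."""
    return all(assignment[a] != assignment[b] for a, b in adjacent)

print(satisfies({"WA": "red", "NT": "green", "Q": "red", "NSW": "green",
                 "V": "red", "SA": "blue", "T": "red"}))  # True
```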

73 Example: Eight Queens Problem. Place 8 queens on a chessboard so that no queen can attack the others. Constraints: no two queens can be in the same row, column or diagonal.

74 Example 2: 8 Queens
– Variables: V1, ..., Vn, where Vi = row occupied by the i-th queen (in the i-th column)
– Domains: dom(Vi) = {1, 2, 3, 4, 5, 6, 7, 8}

75 Example: Eight Queens Problem. [Chessboard figure: one queen per column, columns labeled V1 to V8, rows numbered 1 to 8.]

76-77 Example 2: 8 Queens (cont.)
– Constraints: two queens cannot be on the same row or on the same diagonal.
– We can specify the constraints by enumerating explicitly, for each pair of columns, which positions are allowed. E.g.:
  Constr(V1, V2) = {(1,3), (1,4), ..., (1,8), (2,4), (2,5), ..., (8,1), ..., (8,5), (8,6)}
(A sketch generating this set follows.)
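A sketch that generates the allowed pairs for any two queen columns; allowed_pairs is a hypothetical helper name:

```python
def allowed_pairs(c1, c2):
    """Row pairs (r1, r2) where queens in columns c1, c2 don't attack."""
    return {(r1, r2)
            for r1 in range(1, 9) for r2 in range(1, 9)
            if r1 != r2                        # not the same row
            and abs(r1 - r2) != abs(c1 - c2)}  # not the same diagonal

constr_v1_v2 = allowed_pairs(1, 2)
print(sorted(constr_v1_v2)[:4])   # [(1, 3), (1, 4), (1, 5), (1, 6)]
```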

78 Learning Goals for CSPs so far
– Define possible worlds in terms of variables and their domains.
– Compute the number of possible worlds on real examples.
– Specify constraints to represent real-world problems, differentiating between:
  o unary and k-ary constraints
  o list vs. function format
Coming up: arc consistency and domain splitting. Read Sections 4.5-4.6.

