
1 MBD and CSP Meir Kalech Partially based on slides of Jia You and Brian Williams

2 Outline. Last lecture: 1. Autonomous systems 2. Model-based programming 3. Livingstone. Today's lecture: 1. Constraint satisfaction problems (CSP) 2. Optimal CSP 3. Conflict-directed A* 4. Expand best child

3 Intro Example: 8-Queens. Generate-and-test: 8^8 combinations

4 Intro Example: 8-Queens

5 Constraint Satisfaction Problem. Set of variables {X1, X2, …, Xn}. Each variable Xi has a domain Di of possible values; usually Di is discrete and finite. Set of constraints {C1, C2, …, Cp}; each constraint Ck involves a subset of the variables and specifies the allowable combinations of values of these variables. Goal: assign a value to every variable such that all constraints are satisfied

6 Example: 8-Queens Problem. 8 variables Xi, i = 1 to 8. Domain for each variable: {1, 2, …, 8}. Constraints are of the form: Xi = k ⇒ Xj ≠ k for all j = 1 to 8, j ≠ i
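As an illustration only (not part of the slides), this formulation can be written directly in Python. Here variable X[i] is taken to be the row of the queen in column i, and a diagonal check is added alongside the slide's row constraint:

```python
# Hypothetical sketch of the 8-queens CSP: one variable per column, whose
# value is the row of that column's queen.
from itertools import combinations

variables = list(range(8))                          # columns 0..7
domains = {i: set(range(1, 9)) for i in variables}  # rows 1..8

def consistent(assignment):
    """Check every pairwise constraint over the variables assigned so far."""
    for (i, ri), (j, rj) in combinations(assignment.items(), 2):
        if ri == rj:                    # two queens share a row
            return False
        if abs(ri - rj) == abs(i - j):  # two queens share a diagonal
            return False
    return True

# consistent({0: 1, 1: 3}) -> True ; consistent({0: 1, 1: 1}) -> False
```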

7 Example: Map Coloring. 7 variables {WA, NT, SA, Q, NSW, V, T}. Each variable has the same domain {red, green, blue}. No two adjacent variables have the same value: WA ≠ NT, WA ≠ SA, NT ≠ SA, NT ≠ Q, SA ≠ Q, SA ≠ NSW, SA ≠ V, Q ≠ NSW, NSW ≠ V. (Figure: map of Australia and the corresponding variables.)

8 Example: Task Scheduling. T1 must be done during T3. T2 must be achieved before T1 starts. T2 must overlap with T3. T4 must start after T1 is complete. (Figure: timeline of tasks T1 to T4.)

9 Constraint Graph (binary constraints): two variables are adjacent, or neighbors, if they are connected by an edge or an arc. (Figures: constraint graphs for the map-coloring example, with node T isolated, and for the task-scheduling example T1 to T4.)

10 CSP as a Search Problem. Initial state: the empty assignment. Successor function: assign to an unassigned variable a value that does not conflict with the currently assigned variables. Goal test: the assignment is complete. Path cost: irrelevant

11 Backtracking example

12–14 (Backtracking example continued: figures only.)

15 Backtracking Algorithm. CSP-BACKTRACKING(PartialAssignment a): If a is complete then return a. X ← select an unassigned variable. D ← select an ordering for the domain of X. For each value v in D do: if v is consistent with a then add (X = v) to a; result ← CSP-BACKTRACKING(a); if result ≠ failure then return result; remove (X = v) from a. Return failure. Start with CSP-BACKTRACKING({})
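A minimal Python sketch of CSP-BACKTRACKING as written above; the variable and value orderings are left naive, and the `consistent` predicate (for example the 8-queens one sketched earlier) is assumed to be supplied by the caller:

```python
def backtracking(assignment, variables, domains, consistent):
    """Direct transcription of CSP-BACKTRACKING: returns a complete assignment
    (a dict) or None, which plays the role of 'failure'."""
    if len(assignment) == len(variables):                    # a is complete
        return assignment
    X = next(v for v in variables if v not in assignment)    # select an unassigned variable
    for v in domains[X]:                                     # ordering for the domain of X
        assignment[X] = v                                    # add (X = v) to a
        if consistent(assignment):                           # v is consistent with a
            result = backtracking(assignment, variables, domains, consistent)
            if result is not None:
                return result
        del assignment[X]                                    # remove (X = v) from a
    return None

# Start with CSP-BACKTRACKING({}), e.g. on the 8-queens model sketched above:
# solution = backtracking({}, variables, domains, consistent)
```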

16 Improving backtracking efficiency Which variable should be assigned next? In what order should its values be tried? Can we detect obvious failure early?
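The slides leave these questions open at this point. Purely as an illustration of the kind of answer they point to (not taken from the slides), a most-constrained-variable ordering and a forward-checking test for early failure detection could look like this; `conflicts` is an assumed pairwise-constraint predicate:

```python
import copy

def select_mrv_variable(assignment, domains):
    """Most constrained variable: pick the unassigned variable with the
    fewest remaining values in its domain."""
    unassigned = [v for v in domains if v not in assignment]
    return min(unassigned, key=lambda v: len(domains[v]))

def forward_check(domains, var, value, conflicts):
    """After tentatively assigning var=value, prune incompatible values from the
    other domains; conflicts(var, value, other, other_value) is an assumed
    predicate. Returns the pruned domains, or None on a wipeout, i.e. an
    obvious failure detected early."""
    new_domains = copy.deepcopy(domains)
    for other in new_domains:
        if other == var:
            continue
        new_domains[other] = {w for w in new_domains[other]
                              if not conflicts(var, value, other, w)}
        if not new_domains[other]:
            return None
    return new_domains
```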

17 Outline. Last lecture: 1. Autonomous systems 2. Model-based programming 3. Livingstone. Today's lecture: 1. Constraint satisfaction problems (CSP) 2. Optimal CSP 3. Conflict-directed A* 4. Expand best child

18 What is an Optimal CSP (OCSP)? A set of decision variables y, a utility function g on y, and a set of constraints C that y must satisfy. The solution is an assignment to y that maximizes g and satisfies C.

19 What is an OCSP formally? An OCSP is a tuple ⟨CSP, y, g⟩, where CSP = ⟨x, Dx, Cx⟩: x is a set of variables, Dx the domains of those variables, and Cx a set of constraints over them; y ⊆ x is the set of decision variables; g is a cost function over assignments to y. We call the elements of Dy decision states. A solution y* to an OCSP is a minimum-cost decision state that is consistent with the CSP.
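One way to put this definition into code, as a sketch only (representing constraints and the cost function as Python callables is an assumption, not the slides' notation):

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, List, Set

@dataclass
class CSP:
    variables: List[str]                                  # x
    domains: Dict[str, Set[Any]]                          # D_x
    constraints: List[Callable[[Dict[str, Any]], bool]]   # C_x, each tests an assignment

@dataclass
class OCSP:
    csp: CSP
    decision_vars: List[str]                  # y, a subset of csp.variables
    cost: Callable[[Dict[str, Any]], float]   # g, defined over decision states

# A decision state is an assignment to decision_vars; a solution y* is a
# minimum-cost decision state that is consistent with the CSP.
```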

20 Methods for solving OCSP. Traditional method: A*. Good at finding optimal solutions, but it visits every state whose estimated cost is less than the true optimum*, which is computationally unacceptable for model-based executives. (* The A* heuristic is admissible, that is, it never overestimates cost.) Solution: Conflict-directed A*. It also searches in best-first order, but eliminates subspaces around inconsistent states.

21 Outline. Last lecture: 1. Autonomous systems 2. Model-based programming 3. Livingstone. Today's lecture: 1. Constraint satisfaction problems (CSP) 2. Optimal CSP 3. Conflict-directed A* 4. Expand best child

22 CONFLICT-DIRECTED A* (Williams & Ragno 03). Step 1: The best candidate S is generated. Step 2: S is tested for consistency. Step 3: If S is inconsistent, the inconsistency is generalized to conflicts, removing a subspace of the solution space. Step 4: The program jumps to the next-best candidate, resolving all conflicts.

23 Enumerating Probable Candidates. (Flowchart: generate the leading candidate based on the priors; test it; if consistent, keep it and compute its posterior p, otherwise extract a conflict; repeat until the remaining probability falls below a threshold, then done.)

24 (Figure: candidate states laid out in order of increasing cost, each marked consistent or inconsistent.) A* must visit all states with cost lower than the optimum.

25 Conflict-directed A*: when an inconsistent state is found, it eliminates all states that entail the same symptom. (Figure: the same states in order of increasing cost.)

26–30 (Figures: in order of increasing cost, each newly discovered conflict (Conflict 1, Conflict 2, Conflict 3) eliminates a further region of inconsistent states, and Conflict-directed A* skips over the eliminated regions.)

31 Conflict-directed A*: the feasible regions are described by the implicates of the known conflicts (kernel assignments). We want the kernel assignment containing the best-cost candidate. (Figure: Conflicts 1–3 and the remaining consistent regions, in order of increasing cost.)

32 Conflict-directed A*. Function Conflict-directed-A*(OCSP) returns the leading minimal cost solutions. Conflicts[OCSP] ← {}; OCSP ← Initialize-Best-Kernels(OCSP); Solutions[OCSP] ← {}; loop do: decision-state ← Next-Best-State-Resolving-Conflicts(OCSP); if no decision-state is returned or Terminate?(OCSP) then return Solutions[OCSP]; if Consistent?(CSP[OCSP], decision-state) then add decision-state to Solutions[OCSP]; else new-conflicts ← Extract-Conflicts(CSP[OCSP], decision-state); Conflicts[OCSP] ← Eliminate-Redundant-Conflicts(Conflicts[OCSP] ∪ new-conflicts); end

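A Python skeleton mirroring this pseudocode, as a sketch only. The subroutines `next_best_state_resolving_conflicts`, `consistent`, and `extract_conflicts` stand in for the routines named on the slide and would be supplied by the model-based executive; conflicts are assumed to be represented as sets of mode assignments:

```python
def conflict_directed_a_star(ocsp, next_best_state_resolving_conflicts,
                             consistent, extract_conflicts, max_solutions=1):
    """Skeleton of Conflict-directed A*: enumerate decision states best-first,
    keep the consistent ones, and turn each inconsistency into conflicts that
    prune later candidates."""
    conflicts = []    # Conflicts[OCSP] <- {}
    solutions = []    # Solutions[OCSP] <- {}
    while True:
        state = next_best_state_resolving_conflicts(ocsp, conflicts)
        if state is None or len(solutions) >= max_solutions:   # Terminate?
            return solutions
        if consistent(ocsp, state):
            solutions.append(state)
        else:
            for c in extract_conflicts(ocsp, state):
                # Eliminate-Redundant-Conflicts: keep only non-subsumed conflicts
                if not any(set(old) <= set(c) for old in conflicts):
                    conflicts = [old for old in conflicts if not set(c) <= set(old)]
                    conflicts.append(c)
```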

34 Example: Diagnosis as an OCSP. (Figure: a circuit with multipliers M1, M2, M3 feeding adders A1, A2; inputs A=3, B=2, C=2, D=3, E=3; internal wires X, Y, Z; observed outputs F=10 and G=12.) Assume independent failures: P_G(mi) >> P_U(mi) (G = good, U = unknown); P_single >> P_double; P_U(M2) > P_U(M1) > P_U(M3) > P_U(A1) > P_U(A2).

35 Example: Diagnosis. (Same circuit as above.) OBS = observation. COMPONENTS = variable set y. System Description = constraints Cy, e.g. M1=G ⇒ X = A×C. Candidate = a mode assignment to y. Diagnosis = a candidate that is consistent with Cy and OBS. Utility = candidate probability P(y). Cost = 1/P(y).
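To make the mapping concrete, here is a hedged sketch of the system description for this circuit, assuming the usual reading of the figure (M1, M2, M3 are multipliers, A1, A2 are adders, and the input/output values are as listed above); each constraint only fires when its component's mode is G:

```python
# Observed inputs and outputs, read off the figure on slide 34 (an assumption).
OBS = {"A": 3, "B": 2, "C": 2, "D": 3, "E": 3, "F": 10, "G": 12}

def predictions(modes, obs=OBS):
    """Forward-propagate values for components whose mode is 'G'; components in
    mode 'U' predict nothing. Returns the predicted output values."""
    vals = dict(obs)
    if modes["M1"] == "G": vals["X"] = vals["A"] * vals["C"]
    if modes["M2"] == "G": vals["Y"] = vals["B"] * vals["D"]
    if modes["M3"] == "G": vals["Z"] = vals["C"] * vals["E"]
    preds = {}
    if modes["A1"] == "G" and "X" in vals and "Y" in vals:
        preds["F"] = vals["X"] + vals["Y"]
    if modes["A2"] == "G" and "Y" in vals and "Z" in vals:
        preds["G"] = vals["Y"] + vals["Z"]
    return preds

def is_diagnosis(modes, obs=OBS):
    """A candidate is a diagnosis if none of its predictions contradicts OBS."""
    return all(obs[k] == v for k, v in predictions(modes, obs).items())

# is_diagnosis(dict(M1="G", M2="G", M3="G", A1="G", A2="G"))  # False: predicts F=12
# is_diagnosis(dict(M1="U", M2="G", M3="G", A1="G", A2="G"))  # True
```

Note that this forward-only check is weaker than the consistency test used in the slides: it does not reason backward from observations (for example, inferring Y=4 from F=10 and X=6), which is why the slides hand the test to a propositional satisfiability (DPLL-style) procedure.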

36 First Iteration. Conflicts / constituent diagnoses: none. Best kernel: {}. Best candidate: ?

37 Starting from { }, with M1=? ∧ M2=? ∧ M3=? ∧ A1=? ∧ A2=?, select the most likely value for each unassigned mode: M1=G ∧ M2=G ∧ M3=G ∧ A1=G ∧ A2=G.

38–42 Test: M1=G ∧ M2=G ∧ M3=G ∧ A1=G ∧ A2=G. (Figures: values propagate through the circuit, X=6, Y=6, Z=6, so the candidate predicts F = X+Y = 12, which conflicts with the observation F=10.)

43–46 Extract conflict and constituent diagnoses: the conflict is ¬[M1=G ∧ M2=G ∧ A1=G], so the constituent diagnoses are M1=U ∨ M2=U ∨ A1=U. This extraction is performed with a propositional satisfiability (DPLL) algorithm.

47 Second Iteration. Conflicts / constituent diagnoses: M1=U ∨ M2=U ∨ A1=U. Best kernel: M2=U. Best candidate: M1=G ∧ M2=U ∧ M3=G ∧ A1=G ∧ A2=G. (Later we will describe how to determine the best kernel.)

48–52 Test: M1=G ∧ M2=U ∧ M3=G ∧ A1=G ∧ A2=G. (Figures: with M2 unknown, propagation gives X=6 and Z=6; from the observation F=10 and A1=G it follows that Y=4, so A2=G predicts G = Y+Z = 10, which conflicts with the observation G=12.)

53–55 Extract conflict: ¬[M1=G ∧ M3=G ∧ A1=G ∧ A2=G]; constituent diagnoses: M1=U ∨ M3=U ∨ A1=U ∨ A2=U.

56 Third Iteration. Conflicts / constituent diagnoses: M1=U ∨ M2=U ∨ A1=U and M1=U ∨ M3=U ∨ A1=U ∨ A2=U. Best kernel: M1=U. Best candidate: M1=U ∧ M2=G ∧ M3=G ∧ A1=G ∧ A2=G.

57–62 Test: M1=U ∧ M2=G ∧ M3=G ∧ A1=G ∧ A2=G. (Figures: propagation gives Y=6 and Z=6, so G = Y+Z = 12 matches the observation; from F=10 and A1=G, X=4, and no constraint is violated.) Consistent!

63 Conflict-Directed A*: Generating Best Kernels. Constituent diagnoses: {A1=U, M1=U, M2=U} and {A1=U, A2=U, M1=U, M3=U}. Insight: kernels are found by minimal set covering, which is an instance of breadth-first search; to find the best kernel, expand the tree in best-first order. (Tree: the root { } branches on A1=U, M1=U, M2=U; the M2=U branch is extended with A2=U, M1=U, M3=U, A1=U to cover the second conflict, and extensions such as M2=U ∧ M1=U are pruned as supersets of the kernel M1=U.)

64 Conflict-Directed A*: Generating Best Kernels (continued). The kernels found are A1=U, M1=U, M2=U ∧ A2=U, and M2=U ∧ M3=U. Continue the expansion to find the best candidate; note P_U(M2) > P_U(M1).
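The set-covering step can be sketched in Python (an illustration under the assumption that constituent diagnoses are given as sets of mode-assignment labels): each kernel must contain at least one constituent of every conflict, and supersets of existing kernels are pruned as non-minimal.

```python
def kernels(constituent_diagnoses):
    """Minimal set covering over the constituent diagnoses of the known
    conflicts: every kernel resolves each conflict, and non-minimal
    (superset) kernels are pruned."""
    covers = [frozenset()]
    for constituents in constituent_diagnoses:
        expanded = []
        for k in covers:
            if k & constituents:                       # this conflict is already resolved
                expanded.append(k)
            else:
                expanded.extend(k | {c} for c in constituents)
        covers = [k for k in expanded
                  if not any(other < k for other in expanded)]
    return covers

# kernels([{"M1=U", "M2=U", "A1=U"},
#          {"M1=U", "M3=U", "A1=U", "A2=U"}])
# -> [{"M1=U"}, {"A1=U"}, {"M2=U", "A2=U"}, {"M2=U", "M3=U"}]  (as frozensets)
```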

65 Outline. Last lecture: 1. Autonomous systems 2. Model-based programming 3. Livingstone. Today's lecture: 1. Constraint satisfaction problems (CSP) 2. Optimal CSP 3. Conflict-directed A* 4. Expand best child

66 OCSP Admissible Estimate. For a search node such as M2=U with unassigned modes M1=?, M3=?, A1=?, A2=?, use f = g + h (h is admissible): g accounts for the assigned mode, P(M2=U), and h selects the most likely value for each unassigned mode, P(M1=G) × P(M3=G) × P(A1=G) × P(A2=G). MPI (mutual preferential independence): to find the best candidate we assign each variable its best utility value, independently of the values assigned to the other variables.
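A sketch of this estimate under MPI, using hypothetical prior probabilities whose only connection to the slides is that their ordering matches slide 34:

```python
# Hypothetical prior probabilities; only their ordering follows the slides.
P = {
    "M1": {"G": 0.99, "U": 0.012},
    "M2": {"G": 0.99, "U": 0.015},
    "M3": {"G": 0.99, "U": 0.010},
    "A1": {"G": 0.99, "U": 0.008},
    "A2": {"G": 0.99, "U": 0.005},
}

def estimate(assigned):
    """Optimistic estimate for the best candidate extending 'assigned':
    g multiplies the probabilities of the assigned modes, h the most likely
    mode of every unassigned variable, so no completion can score better
    (the estimate is admissible)."""
    g = 1.0
    for var, mode in assigned.items():
        g *= P[var][mode]
    h = 1.0
    for var in P:
        if var not in assigned:
            h *= max(P[var].values())
    # The slide writes f = g + h; with raw probabilities the combination is a
    # product, which becomes a sum in -log space.
    return g * h

# estimate({"M2": "U"}) -> P_U(M2) * P_G(M1) * P_G(M3) * P_G(A1) * P_G(A2)
```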

67–69 Conflict-directed A*: Expand Best Child & Sibling. Constituent kernels: M2=U ∨ M1=U ∨ A1=U. (Tree: the root { } has children M2=U, M1=U, A1=U, ordered by decreasing likelihood.) For any node N: the child of N containing the best-cost candidate is the child with the best estimated cost f = g + h (by MPI), so only the best child, here M2=U, needs to be expanded.

70 Conflict-directed A*: Expand Best Child & Sibling. Constituent kernels: M2=U ∨ M1=U ∨ A1=U and M1=U ∨ M3=U ∨ A1=U ∨ A2=U. When a best child loses any candidate, expand the child's next-best sibling: if there are unresolved conflicts, expand the sibling as soon as the child's next conflict is resolved; if all conflicts are resolved, expand the sibling as soon as the child has been expanded to a full candidate. (Tree: under { }, the best child M2=U is extended with the remaining modes toward a full candidate, and its next-best sibling M1=U is expanded when needed.)

71 Appendix: A* Search: Preliminaries. Problem (state-space search problem): Initial State; Expand(node), the children of a search node = next states; Goal-Test(node), true if the search node is at a goal state. h: admissible heuristic, an optimistic cost-to-go. Search Node (a node in the search tree): State, the state the search is at; Parent, the parent in the search tree.

72 A* Search: Preliminaries. Problem (state-space search problem): Initial State; Expand(node), the children of a search node = adjacent states; Goal-Test(node), true if the search node is at a goal state; Nodes, search nodes still to be expanded; Expanded, search nodes already expanded; Initialize, the search starts at the initial state with no expanded nodes. h: admissible heuristic, an optimistic cost-to-go. Search Node (a node in the search tree): State, the state the search is at; Parent, the parent in the search tree. Nodes[Problem]: Remove-Best(f), removes the best-cost node according to f; Enqueue(new-node, f), adds a search node to those to be expanded.

73 A* Search. Function A*(problem, h) returns the best solution or failure. Problem pre-initialized. f(x) ← g[problem](x) + h(x). loop do: if Nodes[problem] is empty then return failure; node ← Remove-Best(Nodes[problem], f); state ← State(node); remove any n from Nodes[problem] such that State(n) = state; Expanded[problem] ← Expanded[problem] ∪ {state}; new-nodes ← Expand(node, problem); for each new-node in new-nodes, unless State(new-node) is in Expanded[problem], Nodes[problem] ← Enqueue(Nodes[problem], new-node, f); if Goal-Test[problem] applied to State(node) succeeds then return node; end. Expand best first.

74 A* Search. Function A*(problem, h) returns the best solution or failure. Problem pre-initialized. f(x) ← g[problem](x) + h(x). loop do: if Nodes[problem] is empty then return failure; node ← Remove-Best(Nodes[problem], f); state ← State(node); remove any n from Nodes[problem] such that State(n) = state; Expanded[problem] ← Expanded[problem] ∪ {state}; new-nodes ← Expand(node, problem); for each new-node in new-nodes, unless State(new-node) is in Expanded[problem], Nodes[problem] ← Enqueue(Nodes[problem], new-node, f); if Goal-Test[problem] applied to State(node) succeeds then return node; end. Terminates when...

75 A* Search. Function A*(problem, h) returns the best solution or failure. Problem pre-initialized. f(x) ← g[problem](x) + h(x). loop do: if Nodes[problem] is empty then return failure; node ← Remove-Best(Nodes[problem], f); state ← State(node); remove any n from Nodes[problem] such that State(n) = state; Expanded[problem] ← Expanded[problem] ∪ {state}; new-nodes ← Expand(node, problem); for each new-node in new-nodes, unless State(new-node) is in Expanded[problem], Nodes[problem] ← Enqueue(Nodes[problem], new-node, f); if Goal-Test[problem] applied to State(node) succeeds then return node; end. Dynamic Programming Principle...
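A compact Python rendering of this loop, as a sketch only; the problem interface (hashable states, an `expand` successor function, a `goal_test` predicate, and costs `g` and `h` given as functions of a state) is an assumption made for the example:

```python
import heapq
from itertools import count

def a_star(initial, expand, goal_test, g, h):
    """A* over hashable states: pop nodes in order of f = g + h, never expand a
    state twice (dynamic-programming principle), return the first goal popped."""
    tie = count()                                   # tie-breaker for equal f values
    nodes = [(g(initial) + h(initial), next(tie), initial)]   # Nodes[problem]
    expanded = set()                                # Expanded[problem]
    while nodes:                                    # empty queue means failure
        _, _, state = heapq.heappop(nodes)          # Remove-Best(Nodes[problem], f)
        if state in expanded:
            continue                                # a cheaper duplicate was expanded already
        if goal_test(state):
            return state
        expanded.add(state)
        for child in expand(state):                 # Expand(node, problem)
            if child not in expanded:
                heapq.heappush(nodes, (g(child) + h(child), next(tie), child))
    return None                                     # failure
```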

76 Next Best State Resolving Conflicts. Function Next-Best-State-Resolving-Conflicts(OCSP) returns the best-cost state consistent with Conflicts[OCSP]. f(x) ← G[problem](g[problem](x), h(x)). loop do: if Nodes[OCSP] is empty then return failure; node ← Remove-Best(Nodes[OCSP], f); state ← State[node]; add state to Visited[OCSP]; new-nodes ← Expand-State-Resolving-Conflicts(node, OCSP); for each new-node in new-nodes, unless there exists n in Nodes[OCSP] such that State[n] = State[new-node], or State[new-node] is in Visited[OCSP], Nodes[OCSP] ← Enqueue(Nodes[OCSP], new-node, f); if Goal-Test-State-Resolving-Conflicts[OCSP] applied to state succeeds then return node; end. An instance of A*.

