
1 Midterm Review, cmsc421, Fall 2005

2 Outline
- Review the material covered by the midterm
- Questions?

3 Subjects covered so far…
- Search: blind & heuristic
- Constraint satisfaction
- Adversarial search
- Logic: propositional and FOL

4 … and subjects to be covered
- Planning
- Uncertainty
- Learning
- and a few more…

5 Search

6 Stating a Problem as a Search Problem
- State space S
- Successor function: x ∈ S → SUCCESSORS(x) ∈ 2^S
- Arc cost
- Initial state s0
- Goal test: x ∈ S → GOAL?(x) = T or F
- A solution is a path joining the initial node to a goal node

7 Basic Search Concepts
- Search tree
- Search node
- Node expansion
- Fringe of the search tree
- Search strategy: at each stage it determines which node to expand

8 Search Algorithm
1. If GOAL?(initial-state) then return initial-state
2. INSERT(initial-node, FRINGE)
3. Repeat:
   a. If empty(FRINGE) then return failure
   b. n ← REMOVE(FRINGE)
   c. s ← STATE(n)
   d. For every state s' in SUCCESSORS(s):
      i. Create a new node n' as a child of n
      ii. If GOAL?(s') then return path or goal state
      iii. INSERT(n', FRINGE)
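
The algorithm above can be sketched in Python as follows (a minimal sketch; the function names `tree_search` and `path_to` and the `(state, parent)` node representation are my own, not from the slides). With a FIFO fringe, as here, the generic algorithm behaves as breadth-first search:

```python
from collections import deque

def tree_search(initial_state, goal_test, successors):
    """Generic search; a FIFO fringe makes this breadth-first."""
    if goal_test(initial_state):                # step 1
        return [initial_state]
    fringe = deque([(initial_state, None)])     # nodes are (state, parent)
    while fringe:                               # empty fringe -> failure
        node = fringe.popleft()                 # n <- REMOVE(FRINGE)
        state, _ = node
        for s2 in successors(state):
            child = (s2, node)                  # new node as a child of n
            if goal_test(s2):                   # goal tested at generation
                return path_to(child)
            fringe.append(child)                # INSERT(n', FRINGE)
    return None

def path_to(node):
    """Follow parent links back to the root."""
    path = []
    while node is not None:
        path.append(node[0])
        node = node[1]
    return list(reversed(path))
```

Swapping the fringe for a stack or a priority queue yields the other strategies reviewed below.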

9 Performance Measures
- Completeness: a search algorithm is complete if it finds a solution whenever one exists [What about the case when no solution exists?]
- Optimality: a search algorithm is optimal if it returns an optimal solution whenever a solution exists
- Complexity: measures the time and amount of memory required by the algorithm

10 Blind vs. Heuristic Strategies
- Blind (or uninformed) strategies do not exploit state descriptions to select which node to expand next
- Heuristic (or informed) strategies exploit state descriptions to select the "most promising" node to expand

11 Blind Strategies
- Breadth-first (bidirectional)
- Depth-first (depth-limited, iterative deepening)
- Uniform-cost (variant of breadth-first): arc cost = c(action) ≥ ε > 0

12 Comparison of Strategies
- Breadth-first is complete and optimal, but has high space complexity
- Depth-first is space-efficient, but is neither complete nor optimal
- Iterative deepening is complete and optimal, with the same space complexity as depth-first and almost the same time complexity as breadth-first

13 Avoiding Revisited States
- Requires comparing state descriptions
- Breadth-first search:
  - Store all states associated with generated nodes in CLOSED
  - If the state of a new node is in CLOSED, discard the node

14 Avoiding Revisited States
- Depth-first search:
  - Solution 1: store the states associated with the nodes on the current path in CLOSED; if the state of a new node is in CLOSED, discard the node
    (only avoids loops)
  - Solution 2: store all generated states in CLOSED; if the state of a new node is in CLOSED, discard the node
    (same space complexity as breadth-first!)
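
Solution 2 can be sketched as follows (a hedged sketch, not from the slides; the function name and the path-as-list node representation are my own, and the goal is tested at expansion rather than at generation). The same CLOSED-set idea applies to the breadth-first case on the previous slide:

```python
from collections import deque

def bfs_no_revisit(start, goal_test, successors):
    """Breadth-first search that discards nodes whose state is in CLOSED."""
    closed = {start}                     # all states generated so far
    fringe = deque([[start]])            # store whole paths for simplicity
    while fringe:
        path = fringe.popleft()
        state = path[-1]
        if goal_test(state):
            return path
        for s2 in successors(state):
            if s2 in closed:             # revisited state: discard the node
                continue
            closed.add(s2)
            fringe.append(path + [s2])
    return None                          # fringe exhausted: failure
```

Without the `closed` check, a cyclic state space (as in the test graph below, where A points back to S) would make the search loop forever.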

15 Uniform-Cost Search (Optimal)
- Each arc has some cost c ≥ ε > 0
- The cost of the path to each fringe node N is g(N) = Σ costs of arcs along the path
- The goal is to generate a solution path of minimal cost
- The queue FRINGE is sorted in increasing cost
- Need to modify the search algorithm
(example figure omitted: a graph where path S-A-G costs 11 and path S-B-G costs 10)

16 Modified Search Algorithm
1. INSERT(initial-node, FRINGE)
2. Repeat:
   a. If empty(FRINGE) then return failure
   b. n ← REMOVE(FRINGE)
   c. s ← STATE(n)
   d. If GOAL?(s) then return path or goal state
   e. For every state s' in SUCCESSORS(s):
      i. Create a node n' as a successor of n
      ii. INSERT(n', FRINGE)

17 Avoiding Revisited States in Uniform-Cost Search
- When a node N is expanded, the path to N is also the best path from the initial state to STATE(N), provided this is the first time STATE(N) is encountered
- So:
  - When a node is expanded, store its state in CLOSED
  - When a new node N is generated:
    - If STATE(N) is in CLOSED, discard N
    - If there exists a node N' in the fringe such that STATE(N') = STATE(N), discard the node (N or N') with the higher-cost path
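
A sketch of uniform-cost search with these discard rules (names and interfaces are assumptions, not from the slides; `successors(s)` is assumed to yield `(next_state, arc_cost)` pairs). The second rule is implemented lazily: duplicate fringe entries are allowed on the heap, and stale, more expensive ones are skipped when popped:

```python
import heapq

def uniform_cost_search(start, goal_test, successors):
    """Uniform-cost search with a CLOSED set; arc costs must be > 0."""
    fringe = [(0, start, [start])]       # min-heap of (g, state, path)
    closed = set()
    while fringe:
        g, state, path = heapq.heappop(fringe)
        if state in closed:              # stale entry: a cheaper path won
            continue
        if goal_test(state):             # goal test at expansion => optimal
            return g, path
        closed.add(state)
        for s2, cost in successors(state):
            if s2 not in closed:         # rule 1: discard closed states
                heapq.heappush(fringe, (g + cost, s2, path + [s2]))
    return None
```

On a graph with two paths to the goal, the cheaper one is returned even when the more expensive one is generated first.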

18 Best-First Search
- It exploits the state description to estimate how promising each search node is
- An evaluation function f maps each search node N to a positive real number f(N)
- Traditionally, the smaller f(N), the more promising N
- Best-first search sorts the fringe in increasing f

19 Heuristic Function
- The heuristic function h(N) estimates the distance from STATE(N) to a goal state
- Its value is independent of the current search tree; it depends only on STATE(N) and the goal test
- Example: h1(N) = number of misplaced tiles = 6
(8-puzzle figure omitted: STATE(N) and the goal state)
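
The two standard 8-puzzle heuristics used in these slides can be sketched as follows (a sketch under my own conventions: states are length-9 tuples read row by row, with 0 for the blank; function names are mine):

```python
def h1_misplaced(state, goal):
    """h1: number of misplaced tiles (the blank, 0, is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2_manhattan(state, goal):
    """h2: sum of Manhattan distances of each tile to its goal position
    on a 3x3 board."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = goal.index(tile)             # tile's goal position
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total
```

Both are admissible: each move can fix at most one misplaced tile and reduces one tile's Manhattan distance by at most one.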

20 Classical Evaluation Functions
- h(N): heuristic function [independent of the search tree]
- g(N): cost of the best path found so far between the initial node and N [dependent on the search tree]
- f(N) = h(N) ⇒ greedy best-first search
- f(N) = g(N) + h(N)

21 Can We Prove Anything?
- If the state space is finite and we discard nodes that revisit states, the search is complete, but in general not optimal
- If the state space is finite and we do not discard nodes that revisit states, in general the search is not complete
- If the state space is infinite, in general the search is not complete

22 Admissible Heuristic
- Let h*(N) be the cost of the optimal path from N to a goal node
- The heuristic function h(N) is admissible if: 0 ≤ h(N) ≤ h*(N)
- An admissible heuristic function is always optimistic!
- Note: G is a goal node ⇒ h(G) = 0

23 A* Search (most popular algorithm in AI)
- f(N) = g(N) + h(N), where:
  - g(N) = cost of the best path found so far to N
  - h(N) = admissible heuristic function
- For all arcs: 0 < ε ≤ c(N, N')
- The "modified" search algorithm is used
- Best-first search is then called A* search
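
A* can be sketched by sorting the fringe on f = g + h (a sketch with my own names and interfaces; `successors(s)` is assumed to yield `(next_state, arc_cost)` pairs, and revisited states are handled with a CLOSED set as on slide 17):

```python
import heapq

def astar(start, goal_test, successors, h):
    """A* search: fringe sorted in increasing f(N) = g(N) + h(N)."""
    fringe = [(h(start), 0, start, [start])]   # (f, g, state, path)
    closed = set()
    while fringe:
        f, g, state, path = heapq.heappop(fringe)
        if state in closed:                    # stale duplicate: skip
            continue
        if goal_test(state):                   # goal test at expansion
            return g, path
        closed.add(state)
        for s2, cost in successors(state):
            if s2 not in closed:
                g2 = g + cost
                heapq.heappush(fringe, (g2 + h(s2), g2, s2, path + [s2]))
    return None
```

With h = 0 everywhere this reduces to uniform-cost search; with an admissible h it returns an optimal solution, matching Result #1 on the next slide.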

24 Result #1: A* is complete and optimal

25 Experimental Results
- 8-puzzle with:
  - h1 = number of misplaced tiles
  - h2 = sum of distances of tiles to their goal positions
- Random generation of many problem instances
- Average effective branching factors (number of expanded nodes in parentheses):

  d    IDS                A*(h1)         A*(h2)
  2    2.45               1.79
  6    2.73               1.34           1.30
  12   2.78 (3,644,035)   1.42 (227)     1.24 (73)
  16   --                 1.45           1.25
  20   --                 1.47           1.27
  24   --                 1.48 (39,135)  1.26 (1,641)

26 Iterative Deepening A* (IDA*)
- Idea: reduce the memory requirement of A* by applying a cutoff on values of f
- Consistent heuristic h
- Algorithm IDA*:
  1. Initialize the cutoff to f(initial-node)
  2. Repeat:
     a. Perform depth-first search, expanding all nodes N such that f(N) ≤ cutoff
     b. Reset the cutoff to the smallest value of f among the non-expanded (leaf) nodes
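
The two steps above can be sketched as follows (a sketch, not from the slides; names and the `(next_state, arc_cost)` successor interface are my own, loops are avoided by checking the current path, and h is assumed consistent as stated):

```python
import math

def ida_star(start, goal_test, successors, h):
    """IDA*: repeated depth-first contours bounded by f = g + h."""
    cutoff = h(start)                         # f(initial-node), g = 0
    while True:
        result, next_cutoff = _bounded_dfs(start, goal_test, successors,
                                           h, 0, cutoff, [start])
        if result is not None:
            return result                     # solution path
        if math.isinf(next_cutoff):
            return None                       # nothing left to expand
        cutoff = next_cutoff                  # smallest f beyond old cutoff

def _bounded_dfs(state, goal_test, successors, h, g, cutoff, path):
    f = g + h(state)
    if f > cutoff:
        return None, f                        # pruned: candidate next cutoff
    if goal_test(state):
        return path, f
    smallest = math.inf
    for s2, cost in successors(state):
        if s2 in path:                        # avoid loops on current path
            continue
        result, t = _bounded_dfs(s2, goal_test, successors, h,
                                 g + cost, cutoff, path + [s2])
        if result is not None:
            return result, t
        smallest = min(smallest, t)
    return None, smallest
```

Only the current path is stored, which is the point: memory is linear in the solution depth rather than in the number of generated nodes.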

27 Local Search
- Light-memory search method
- No search tree; only the current state is represented!
- Applicable only to problems where the path is irrelevant (e.g., 8-queens), unless the path is encoded in the state
- Many similarities with optimization techniques

28 Search problems
- Blind search
- Heuristic search: best-first and A*
- Construction of heuristics
- Local search
- Variants of A*

29 When to Use Search Techniques?
1. The search space is small, and
   - no other technique is available
   - developing a more efficient technique is not worth the effort
2. The search space is large, and
   - no other technique is available, and
   - there exist "good" heuristics

30 Constraint Satisfaction

31 Constraint Satisfaction Problem
- Set of variables {X1, X2, …, Xn}
- Each variable Xi has a domain Di of possible values; usually Di is discrete and finite
- Set of constraints {C1, C2, …, Cp}
- Each constraint Ck involves a subset of the variables and specifies the allowable combinations of values of these variables
- Goal: assign a value to every variable such that all constraints are satisfied

32 CSP as a Search Problem
- Initial state: empty assignment
- Successor function: assign to any unassigned variable a value that does not conflict with the currently assigned variables
- Goal test: the assignment is complete
- Path cost: irrelevant

33 Questions
1. Which variable X should be assigned a value next?
   - Minimum Remaining Values / most-constrained variable
2. In which order should its domain D be sorted?
   - Least-constraining value
3. How should constraints be propagated?
   - Forward checking
   - Arc consistency
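
Backtracking with the MRV variable ordering and forward checking can be sketched as below (a sketch, not from the slides: function names are mine, and the constraints are assumed to be binary inequality between neighboring variables, as in map coloring; least-constraining-value ordering and arc consistency are omitted for brevity):

```python
def backtracking_search(variables, domains, neighbors):
    """CSP backtracking with MRV and forward checking (inequality constraints)."""
    return _backtrack({}, variables,
                      {v: list(d) for v, d in domains.items()}, neighbors)

def _backtrack(assignment, variables, domains, neighbors):
    if len(assignment) == len(variables):
        return assignment
    # MRV: pick the unassigned variable with the fewest remaining values
    var = min((v for v in variables if v not in assignment),
              key=lambda v: len(domains[v]))
    for value in list(domains[var]):
        assignment[var] = value
        # Forward checking: prune `value` from unassigned neighbors' domains
        pruned = []
        ok = True
        for n in neighbors[var]:
            if n not in assignment and value in domains[n]:
                domains[n].remove(value)
                pruned.append(n)
                if not domains[n]:       # domain wipeout: value cannot work
                    ok = False
                    break
        if ok:
            result = _backtrack(assignment, variables, domains, neighbors)
            if result is not None:
                return result
        for n in pruned:                 # undo pruning before the next value
            domains[n].append(value)
        del assignment[var]
    return None
```

Because forward checking removes conflicting values as soon as a variable is assigned, no separate consistency check is needed when choosing a value.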

34 Adversarial Search

35 Specific Setting
Two-player, turn-taking, deterministic, fully observable, zero-sum, time-constrained game
- State space
- Initial state
- Successor function: tells which actions can be executed in each state and gives the successor state for each action
- MAX's and MIN's actions alternate, with MAX playing first in the initial state
- Terminal test: tells if a state is terminal and, if so, whether it is a win or a loss for MAX, or a draw
- All states are fully observable

36 Choosing an Action: Basic Idea
1. Using the current state as the initial state, build the game tree uniformly to the maximal depth h (called the horizon) feasible within the time limit
2. Evaluate the states of the leaf nodes
3. Back up the results from the leaves to the root and pick the best action, assuming the worst from MIN
⇒ Minimax algorithm

37 Minimax Algorithm
1. Expand the game tree uniformly from the current state (where it is MAX's turn to play) to depth h
2. Compute the evaluation function at every leaf of the tree
3. Back up the values from the leaves to the root of the tree as follows:
   a. A MAX node gets the maximum of the evaluations of its successors
   b. A MIN node gets the minimum of the evaluations of its successors
4. Select the move toward a MIN node that has the largest backed-up value
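
Steps 1 through 4 can be sketched compactly (a sketch; the function names and the `successors`/`evaluate`/`terminal` interfaces are my assumptions, not from the slides):

```python
def minimax(state, depth, maximizing, successors, evaluate, terminal):
    """Depth-limited minimax: MAX takes the maximum, MIN the minimum."""
    if depth == 0 or terminal(state):
        return evaluate(state)               # step 2: evaluate the leaves
    values = [minimax(s2, depth - 1, not maximizing,
                      successors, evaluate, terminal)
              for s2 in successors(state)]
    return max(values) if maximizing else min(values)   # step 3

def best_move(state, depth, successors, evaluate, terminal):
    """Step 4: MAX picks the successor with the largest backed-up value."""
    return max(successors(state),
               key=lambda s2: minimax(s2, depth - 1, False,
                                      successors, evaluate, terminal))
```

On a two-ply tree where MIN replies, MAX chooses the branch whose worst case is best, not the branch containing the single largest leaf.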

38 Alpha-Beta Pruning
- Explore the game tree to depth h in depth-first manner
- Back up alpha and beta values whenever possible
- Prune branches that can't lead to changing the final decision

39 Example: The beta value of a MIN node is an upper bound on the final backed-up value. It can never increase. (pruning example figure omitted)

40 Example: The alpha value of a MAX node is a lower bound on the final backed-up value. It can never decrease. (pruning example figure omitted)

41 Alpha-Beta Algorithm
- Update the alpha/beta value of the parent of a node N when the search below N has been completed or discontinued
- Discontinue the search below a MAX node N if its alpha value is ≥ the beta value of a MIN ancestor of N
- Discontinue the search below a MIN node N if its beta value is ≤ the alpha value of a MAX ancestor of N
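
The pruning rules above can be sketched as a variant of minimax (a sketch with my own names; the same `successors`/`evaluate`/`terminal` interface is assumed as in the minimax slide). Passing alpha and beta down the tree makes the ancestor comparisons local:

```python
import math

def alphabeta(state, depth, alpha, beta, maximizing,
              successors, evaluate, terminal):
    """Minimax with alpha-beta pruning; returns the backed-up value."""
    if depth == 0 or terminal(state):
        return evaluate(state)
    if maximizing:
        value = -math.inf
        for s2 in successors(state):
            value = max(value, alphabeta(s2, depth - 1, alpha, beta, False,
                                         successors, evaluate, terminal))
            alpha = max(alpha, value)
            if alpha >= beta:        # alpha >= beta of a MIN ancestor: prune
                break
        return value
    else:
        value = math.inf
        for s2 in successors(state):
            value = min(value, alphabeta(s2, depth - 1, alpha, beta, True,
                                         successors, evaluate, terminal))
            beta = min(beta, value)
            if beta <= alpha:        # beta <= alpha of a MAX ancestor: prune
                break
        return value
```

The root call uses alpha = -infinity and beta = +infinity; pruning never changes the value backed up to the root, only the number of nodes visited.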

42 Logical Representations and Theorem Proving

43 Logical Representations
- Propositional logic
- First-order logic
- Syntax and semantics
- Models, entailment, etc.

44 The Game
Rules:
1. Red goes first
2. On their turn, a player must move their piece
3. They must move to a neighboring square, or, if their opponent is adjacent to them with a blank square on the far side, they can hop over them
4. The player that makes it to the far side first wins

45 Logical Inference
- Propositional: truth tables or resolution
- FOL: resolution + unification
- Strategies:
  - shortest clause first
  - set of support
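
The truth-table method for propositional entailment can be sketched as follows (a sketch, not from the slides; sentences are represented as Python boolean functions of a model, which is my own convention for brevity):

```python
from itertools import product

def tt_entails(kb, query, symbols):
    """Truth-table entailment: KB |= query iff query holds in every
    model (assignment of truth values to symbols) that satisfies KB."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not query(model):
            return False                 # found a model of KB where query fails
    return True
```

For example, from KB = (P ⇒ Q) ∧ P the query Q is entailed, while ¬Q is not. The method is sound and complete but enumerates 2^n models, which is why resolution matters for larger problems.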

46 Questions?

