1 Informed Search Methods (Chapter 4, Spring 2004)

2 What we'll learn
- Informed search algorithms are more efficient in most cases
- What informed search methods are
- How to use problem-specific knowledge
- How to optimize a solution

3 Best-First Search
- An evaluation function gives a measure of which node to expand: minimize the estimated cost to reach a goal.
- Greedy search: at node n, expand using a heuristic function h(n); an example is straight-line distance (Fig 4.1, the simple Romania map).
- Finding the route using greedy search: example in Fig 4.2.
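A minimal sketch of greedy best-first search, assuming the graph is an adjacency dict and h is a table of straight-line-distance estimates taken from Fig 4.1; the function name and data layout are illustrative, not from the slides:

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Expand the frontier node with the smallest heuristic value h(n) first."""
    frontier = [(h[start], start, [start])]   # (h-value, node, path so far)
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(frontier, (h[neighbor], neighbor, path + [neighbor]))
    return None  # no path found

# Fragment of the Romania map (Fig 4.1); h = straight-line distance to Bucharest.
graph = {'Arad': ['Sibiu', 'Timisoara', 'Zerind'],
         'Sibiu': ['Fagaras', 'Rimnicu Vilcea'],
         'Fagaras': ['Bucharest'],
         'Rimnicu Vilcea': ['Pitesti'],
         'Pitesti': ['Bucharest']}
h = {'Arad': 366, 'Sibiu': 253, 'Timisoara': 329, 'Zerind': 374,
     'Fagaras': 176, 'Rimnicu Vilcea': 193, 'Pitesti': 100, 'Bucharest': 0}
print(greedy_best_first(graph, h, 'Arad', 'Bucharest'))
# ['Arad', 'Sibiu', 'Fagaras', 'Bucharest'] -- the Fig 4.2 route, which is
# found quickly but is not the cheapest one
```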

4 Best-first search (2)
- h(n) is independent of the path cost g(n).
- Minimizing the total path cost: f(n) = g(n) + h(n), the estimated cost of the cheapest solution through n.
- Admissible heuristic function: h never overestimates the cost to reach the goal; it is optimistic.

5 A* search
- How it works (Fig 4.3)
- Characteristics of A*:
  - Monotonicity: f-values are nondecreasing along any path; tree search is used to ensure monotonicity
  - Contours (Fig 4.4): from circle to oval (ellipse)
- Proof of the optimality of A*
- The completeness of A* (Fig 4.4, contours)
- Complexity of A* (time and space): for most problems, the number of nodes within the goal-contour search space is still exponential in the length of the solution.
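A compact A* sketch under the same assumptions, with the graph now carrying step costs as nested dicts and reusing the h table from the greedy sketch; h must be admissible for the returned path to be optimal:

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: expand the frontier node with the smallest f(n) = g(n) + h(n).
    graph maps a node to a dict {neighbor: step_cost}."""
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}                          # cheapest known cost to each node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for neighbor, cost in graph.get(node, {}).items():
            g2 = g + cost
            if g2 < best_g.get(neighbor, float('inf')):
                best_g[neighbor] = g2
                heapq.heappush(frontier,
                               (g2 + h[neighbor], g2, neighbor, path + [neighbor]))
    return None, float('inf')

# Same Romania fragment, now with step costs:
graph = {'Arad': {'Sibiu': 140, 'Timisoara': 118, 'Zerind': 75},
         'Sibiu': {'Fagaras': 99, 'Rimnicu Vilcea': 80},
         'Fagaras': {'Bucharest': 211},
         'Rimnicu Vilcea': {'Pitesti': 97},
         'Pitesti': {'Bucharest': 101}}
path, cost = a_star(graph, h, 'Arad', 'Bucharest')
# ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'], cost 418
```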

6 Different Search Strategies
- Uniform-cost search: minimize the path cost so far, g(n)
- Greedy search: minimize the estimated remaining cost, h(n)
- A*: minimize the estimated total path cost, f(n) = g(n) + h(n)
- Time and space issues of A* motivate designing good heuristic functions
- A* usually runs out of space long before it runs out of time
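The point of this slide is that the three strategies are one best-first template differing only in the evaluation function f; a sketch of that view, reusing the weighted graph and h table from above:

```python
import heapq

def best_first(graph, start, goal, f):
    """Generic best-first search; the evaluation function f(g, node) sets the strategy."""
    frontier = [(f(0, start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for neighbor, cost in graph.get(node, {}).items():
            g2 = g + cost
            if g2 < best_g.get(neighbor, float('inf')):
                best_g[neighbor] = g2
                heapq.heappush(frontier, (f(g2, neighbor), g2, neighbor, path + [neighbor]))
    return None, float('inf')

# The three strategies differ only in f:
uniform_cost = lambda g, n: g           # minimize path cost so far
greedy_f     = lambda g, n: h[n]        # minimize estimated remaining cost
a_star_f     = lambda g, n: g + h[n]    # minimize estimated total cost

print(best_first(graph, 'Arad', 'Bucharest', a_star_f))
```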

7 Heuristic Functions
- An example: the 8-puzzle (Fig 4.7)
- How simple can a heuristic be?
  - The number of tiles out of their correct positions (misplaced tiles)
  - Using Manhattan distance
- What is a good heuristic?
  - Effective branching factor close to 1 (why?)
  - Value of h: not too large, since it must be admissible (why?); not too small, or it is ineffective (the contour swells from oval back toward circle, expanding all nodes with f(n) < f*)
  - Goodness measure: number of nodes expanded (Fig 4.8)
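Sketches of the two 8-puzzle heuristics mentioned here, h1 (misplaced tiles) and h2 (Manhattan distance), assuming states are encoded as length-9 tuples with 0 for the blank (an illustrative encoding, not from the slides):

```python
def misplaced_tiles(state, goal):
    """h1: number of tiles not in their goal position (the blank is excluded)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan(state, goal):
    """h2: sum over tiles of |row - goal_row| + |col - goal_col| on a 3x3 board."""
    pos = {tile: divmod(i, 3) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile != 0:
            r, c = divmod(i, 3)
            gr, gc = pos[tile]
            total += abs(r - gr) + abs(c - gc)
    return total

goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)
state = (1, 2, 3, 4, 0, 6, 7, 5, 8)
print(misplaced_tiles(state, goal), manhattan(state, goal))  # 2 2
```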

8 Domination translates directly into efficiency
- A larger h means a smaller effective branching factor.
- If h2(n) >= h1(n) for all n, is h2 always at least as good as h1? Proof idea: A* expands every node with h(n) < C* - g(n), and h1(n) <= h2(n) <= C* - g(n), so any node expanded under h2 is also expanded under h1.
- Inventing heuristic functions: work on relaxed problems, i.e., remove some constraints.

9 The 8-puzzle revisited
- Definition: a tile can move from square A to square B if A is horizontally or vertically adjacent to B and B is blank.
- Relaxation by removing one or both conditions:
  - A tile can move from A to B if A is adjacent to B (gives Manhattan distance)
  - A tile can move from A to B if B is blank
  - A tile can move from A to B (gives misplaced tiles)
- Deriving a heuristic from the solution cost of a subproblem (Fig 4.9)

10 If we have admissible heuristics h1, ..., hm and none dominates, we can take, for node n, h(n) = max(h1(n), ..., hm(n)).
- Feature selection and combination: use only relevant features, e.g., "number of misplaced tiles" as a feature.
- The cost of computing the heuristic should not exceed the cost of expanding a node; otherwise, we need to rethink.
- Learning heuristics from experience: each optimal solution to the 8-puzzle provides a learning example.
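A one-liner for the max-combination rule, reusing the 8-puzzle heuristics sketched earlier; if every hi is admissible (a lower bound on the true cost), their pointwise max is still a lower bound, hence admissible, and it dominates each hi:

```python
def combined(heuristics):
    """Pointwise max of admissible heuristics: admissible, and dominates each hi."""
    return lambda state: max(h(state) for h in heuristics)

# Reusing the 8-puzzle sketches above:
h8 = combined([lambda s: misplaced_tiles(s, goal),
               lambda s: manhattan(s, goal)])
```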

11 Improving A*: memory-bounded heuristic search
- Iterative-deepening A* (IDA*)
  - Uses the f-cost (g + h) rather than the depth as the cutoff
  - The new cutoff is the smallest f-cost of any node that exceeded the cutoff on the previous iteration
  - Space complexity O(bd)
- Recursive best-first search (RBFS)
  - Best-first search using only linear space (Fig 4.5)
  - Replaces the f-value of each node along the path with the best f-value of its children (Fig 4.6)
  - Space complexity O(bd)
- Simplified memory-bounded A* (SMA*)
  - IDA* and RBFS use too little memory, causing excessive node regeneration
  - Expands the best leaf until memory is full
  - Drops the worst leaf node (highest f-value), backing its value up to its parent
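A sketch of the IDA* loop described above, assuming successors(s) yields (next_state, step_cost) pairs; the names and interfaces are illustrative:

```python
def ida_star(start, goal_test, successors, h):
    """Iterative-deepening A*: depth-first search bounded by an f-cost cutoff."""
    def search(path, g, bound):
        node = path[-1]
        f = g + h(node)
        if f > bound:
            return f                 # report the f-cost that exceeded the cutoff
        if goal_test(node):
            return path
        minimum = float('inf')
        for succ, cost in successors(node):
            if succ not in path:     # avoid cycles along the current path
                result = search(path + [succ], g + cost, bound)
                if isinstance(result, list):
                    return result
                minimum = min(minimum, result)
        return minimum

    bound = h(start)
    while True:
        result = search([start], 0, bound)
        if isinstance(result, list):
            return result            # a solution path
        if result == float('inf'):
            return None              # no solution
        bound = result               # next cutoff: smallest f-cost that exceeded it
```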

12 Local Search Algorithms and Optimization Problems
- Global and local optima: Fig 4.10, from the current state to the global maximum
- Hill climbing (maximization)
  - Well-known drawbacks (Fig 4.13): local maxima, plateaus, ridges
  - Random restarts
- Simulated annealing
  - Gradient descent (minimization)
  - Escapes local minima by controlled bouncing
- Local beam search: keeps track of k states instead of just one
- Genetic algorithms
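A sketch of simulated annealing for maximization, assuming neighbors(s) returns candidate states and value(s) scores them; the cooling schedule is a free parameter, and all names here are illustrative:

```python
import math
import random

def simulated_annealing(state, neighbors, value, schedule, steps=100_000):
    """Hill climbing that can escape local maxima: a downhill move is accepted
    with probability exp(delta / T), which shrinks as the temperature T cools."""
    for t in range(1, steps + 1):
        T = schedule(t)
        if T <= 1e-9:                # frozen: stop bouncing
            break
        candidate = random.choice(neighbors(state))
        delta = value(candidate) - value(state)   # > 0 means uphill (better)
        if delta > 0 or random.random() < math.exp(delta / T):
            state = candidate
    return state

# e.g. geometric cooling: schedule = lambda t: 100 * (0.995 ** t)
```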

13 Online Search
- Offline search: computing a complete solution before acting
- Online search: interleaving computation and action
- Solves exploration problems, where the states and actions are unknown to the agent
- Good for domains where there is a penalty for computing too long, or for stochastic domains

14 Online search problems
- What the agent knows (e.g., Fig 4.18):
  - Actions(s): the legal actions in state s
  - The step-cost function c(s, a, s')
  - Goal-Test(s)
- Optionally: a memory of the states visited, and an admissible heuristic from the current state to the goal state
- Objective: reach a goal state while minimizing cost
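The agent's knowledge listed above can be written down as a small interface; this is an illustrative sketch, not code from the chapter:

```python
class OnlineProblem:
    """What the online agent is given: legal actions, step costs (observable
    only once a transition is taken), and a goal test."""
    def actions(self, s):
        """Actions(s): the legal actions in state s."""
        raise NotImplementedError
    def step_cost(self, s, a, s2):
        """c(s, a, s'): cost of taking action a in s and arriving at s'."""
        raise NotImplementedError
    def goal_test(self, s):
        """Goal-Test(s)."""
        raise NotImplementedError
```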

15 Measuring its performance
- Competitive ratio: the cost of the path actually taken divided by the cost of the path the agent would follow if it knew the search space in advance (e.g., an actual cost of 15 against an optimal cost of 10 gives a ratio of 1.5).
- The best achievable competitive ratio can be infinite: if some actions are irreversible, the agent may reach a dead end (Fig 4.19 (a)).
- An adversary argument: Fig 4.19 (b).
- No bounded competitive ratio can be guaranteed if there are paths of unbounded cost.

16 Online search agents
- The agent can expand only a node it physically occupies, so it should expand nodes in a local order.
- Online depth-first search (Fig 4.20): backtracking requires that actions be reversible.
- Hill-climbing search keeps one current state in memory, but it can get stuck in a local minimum, and random restart does not work here (the agent cannot teleport to a new start state).
- A random walk selects one of the available actions from the current state at random; it can be very slow (Fig 4.21).
- Augmenting hill climbing with memory rather than randomness is more effective: the estimate H(s) is updated as the agent gains experience (LRTA*, Fig 4.22).
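A sketch of the memory-augmented hill climbing (LRTA*-style) update H(s) <- min over a of [c(s, a, s') + H(s')], using the OnlineProblem interface sketched earlier plus an assumed deterministic result(s, a) transition model; the book's agent learns that table from experience rather than being handed it:

```python
def lrta_star(problem, result, s, h):
    """Move to the neighbor that looks cheapest under the learned estimates H,
    and raise H(s) to the best one-step lookahead value so that local minima
    get filled in over time instead of trapping the agent."""
    H = {}                                    # learned cost-to-go estimates
    path = [s]
    while not problem.goal_test(s):
        def lookahead(a):
            s2 = result(s, a)
            return problem.step_cost(s, a, s2) + H.get(s2, h(s2))
        a = min(problem.actions(s), key=lookahead)
        H[s] = lookahead(a)                   # H(s) <- min_a [c(s,a,s') + H(s')]
        s = result(s, a)                      # physically take the action
        path.append(s)
    return path
```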

17 Summary
- Heuristics are the key to reducing search costs: f(n) = g(n) + h(n).
- A* is complete, optimal, and optimally efficient among all optimal search algorithms, but...
- Iterative-improvement algorithms are memory efficient, but...
- Local search
- Online search is different from offline search

