CHAPTER 4: INFORMED SEARCH & EXPLORATION Prepared by: Ece UYKUR.



Outline

- Evaluation of Search Strategies
- Informed Search Strategies
  - Best-first search
  - Greedy best-first search
  - A* search
- Admissible Heuristics
- Memory-Bounded Search
  - Iterative-Deepening A* Search
  - Recursive Best-First Search
  - Simplified Memory-Bounded A* Search
- Summary

Evaluation of Search Strategies

Search algorithms are commonly evaluated according to the following four criteria:

- Completeness: does it always find a solution if one exists?
- Time complexity: how long does it take, as a function of the number of nodes?
- Space complexity: how much memory does it require?
- Optimality: does it guarantee the least-cost solution?

Time and space complexity are measured in terms of:

- b – maximum branching factor of the search tree
- d – depth of the least-cost solution
- m – maximum depth of the search tree (may be infinite)

Before Starting…

- A solution is a sequence of operators that brings you from the current state to the goal state.
- The search strategy is determined by the order in which the nodes are expanded.
- Uninformed search strategies use only the information available in the problem formulation: breadth-first, uniform-cost, depth-first, depth-limited, iterative deepening.

- Uninformed search strategies look for solutions by systematically generating new states and checking each of them against the goal.
- This approach is very inefficient in most cases: most successor states are "obviously" bad choices, but such strategies cannot know that because they have minimal problem-specific knowledge.

Informed Search Strategies

- Informed search strategies exploit problem-specific knowledge as much as possible to drive the search.
- They are almost always more efficient than uninformed searches, and often also optimal.
- Also called heuristic search.
- They use knowledge of the problem domain to build an evaluation function f.
- The search strategy: for every node n in the search space, f(n) quantifies the desirability of expanding n in order to reach the goal.

- Uses the desirability values of the nodes in the fringe to decide which node to expand next.
- f is typically an imperfect measure of the goodness of a node: the right choice is not always the one suggested by f.
- In principle a perfect evaluation function exists, one that always suggests the right choice, but computing it is generally as hard as solving the original problem.
- The general approach is called "best-first search".

Best-First Search

- Choose the most desirable (seemingly best) node for expansion, based on the evaluation function: lowest cost / highest probability.
- Implementation: the fringe is a priority queue ordered by desirability.
- Evaluation function f(n): selects a node for expansion (usually the lowest-cost node); measures the desirability of node n.
- Heuristic function h(n): estimates the cheapest path cost from node n to a goal node.
- g(n): the cost from the initial state to the current state n.

Best-First Search Strategy:

- Pick the "best" element of Q (measured by the heuristic value of the state).
- Add path extensions anywhere in Q (it may be more efficient to keep Q ordered in some way, so as to make it easier to find the "best" element).

There are many possible approaches to finding the best node in Q:

- Scanning Q to find the lowest value,
- Sorting Q and picking the first element,
- Keeping Q sorted by doing "sorted" insertions,
- Keeping Q as a priority queue.
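The loop above can be sketched in Python; this is a minimal illustration (the function and variable names are mine, not from the slides), keeping Q as a priority queue via `heapq` so the best element is always popped first:

```python
import heapq

def best_first_search(start, goal_test, successors, f):
    """Generic best-first search: repeatedly pop the frontier node with
    the lowest f-value. `successors(state)` yields (cost, next_state)
    pairs; `f(state, g)` scores a node given its path cost g so far.
    Returns (path, cost) on success, or None."""
    frontier = [(f(start, 0), 0, start, [start])]  # (f, g, state, path)
    explored = set()
    while frontier:
        _, g, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return path, g
        if state in explored:
            continue
        explored.add(state)
        for cost, nxt in successors(state):
            if nxt not in explored:
                heapq.heappush(frontier,
                               (f(nxt, g + cost), g + cost, nxt, path + [nxt]))
    return None
```

Plugging in f(n) = g(n) recovers uniform-cost search; f(n) = h(n) gives greedy best-first; f(n) = g(n) + h(n) gives A*.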

Several kinds of best-first search will be introduced:

- Greedy best-first search
- A* search

Greedy Best-First Search

- The evaluation function is the heuristic function alone.
- Expand the node that appears to be closest to the goal, based only on the heuristic function: f(n) = h(n) = estimate of the cost from n to the closest goal.
- Ex: the straight-line-distance heuristic, h_SLD(n) = straight-line distance from n to Bucharest.
- Greedy search expands first the node that appears to be closest to the goal, according to h(n).
- "Greedy": at each search step the algorithm tries to get as close to the goal as it can.

Romania with step costs in km

- h_SLD(In(Arad)) = 366.
- Notice that the values of h_SLD cannot be computed from the problem description itself.
- It takes some experience to know that h_SLD is correlated with actual road distances, and is therefore a useful heuristic.
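As an illustration, here is greedy best-first search run on a small fragment of the Romania map (step costs and h_SLD values as in the AIMA textbook; the code itself is a sketch, not the book's pseudocode). Because it trusts h alone and ignores g, greedy search reaches Bucharest via Fagaras (450 km) and misses the cheaper route through Rimnicu Vilcea and Pitesti (418 km):

```python
import heapq

# Fragment of the Romania map: city -> [(neighbor, road distance in km), ...]
graph = {
    'Arad': [('Sibiu', 140), ('Timisoara', 118), ('Zerind', 75)],
    'Sibiu': [('Fagaras', 99), ('Rimnicu Vilcea', 80)],
    'Fagaras': [('Bucharest', 211)],
    'Rimnicu Vilcea': [('Pitesti', 97)],
    'Pitesti': [('Bucharest', 101)],
    'Timisoara': [], 'Zerind': [], 'Bucharest': [],
}
# Straight-line distances to Bucharest (h_SLD)
h_sld = {'Arad': 366, 'Sibiu': 253, 'Timisoara': 329, 'Zerind': 374,
         'Fagaras': 176, 'Rimnicu Vilcea': 193, 'Pitesti': 100, 'Bucharest': 0}

def greedy_best_first(start, goal):
    """Always expand the node with the smallest h(n); path cost g is ignored."""
    frontier = [(h_sld[start], start, [start], 0)]  # (h, city, path, g)
    visited = set()
    while frontier:
        _, city, path, cost = heapq.heappop(frontier)
        if city == goal:
            return path, cost
        if city in visited:
            continue
        visited.add(city)
        for nxt, d in graph[city]:
            heapq.heappush(frontier, (h_sld[nxt], nxt, path + [nxt], cost + d))
    return None

# greedy_best_first('Arad', 'Bucharest')
# -> (['Arad', 'Sibiu', 'Fagaras', 'Bucharest'], 450), not the 418 km optimum
```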

Properties of Greedy Search

- Complete? No – it can get stuck in loops / follow a single path to the goal. Ex: Iasi > Neamt > Iasi > Neamt > … It is complete in a finite space with repeated-state checking.
- Time? O(b^m), but a good heuristic can give dramatic improvement.
- Space? O(b^m) – keeps all nodes in memory.
- Optimal? No.

A* Search

- Idea: avoid expanding paths that are already expensive.
- Evaluation function: f(n) = g(n) + h(n), with:
  g(n) – cost so far to reach n
  h(n) – estimated cost to the goal from n
  f(n) – estimated total cost of the path through n to the goal
- A* search uses an admissible heuristic, that is, h(n) ≤ h*(n), where h*(n) is the true cost from n. Ex: h_SLD(n) never overestimates the actual road distance.
- Theorem: A* search is optimal.
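On the same Romania fragment used for greedy search, A* with f(n) = g(n) + h_SLD(n) does find the optimal route. This is a hedged sketch (illustrative names and data); note that the goal test is applied when a node is popped, not when it is generated, which matters for optimality:

```python
import heapq

graph = {
    'Arad': [('Sibiu', 140), ('Timisoara', 118), ('Zerind', 75)],
    'Sibiu': [('Fagaras', 99), ('Rimnicu Vilcea', 80)],
    'Fagaras': [('Bucharest', 211)],
    'Rimnicu Vilcea': [('Pitesti', 97)],
    'Pitesti': [('Bucharest', 101)],
    'Timisoara': [], 'Zerind': [], 'Bucharest': [],
}
h_sld = {'Arad': 366, 'Sibiu': 253, 'Timisoara': 329, 'Zerind': 374,
         'Fagaras': 176, 'Rimnicu Vilcea': 193, 'Pitesti': 100, 'Bucharest': 0}

def a_star(start, goal):
    """Expand the node with the smallest f = g + h; goal-test on pop."""
    frontier = [(h_sld[start], 0, start, [start])]  # (f, g, city, path)
    best_g = {}                                     # cheapest g seen per city
    while frontier:
        f, g, city, path = heapq.heappop(frontier)
        if city == goal:
            return path, g
        if city in best_g and best_g[city] <= g:
            continue                                # stale queue entry
        best_g[city] = g
        for nxt, d in graph[city]:
            heapq.heappush(frontier,
                           (g + d + h_sld[nxt], g + d, nxt, path + [nxt]))
    return None

# a_star('Arad', 'Bucharest')
# -> (['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'], 418)
```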

- When h(n) = actual cost to goal: only nodes on the correct path are expanded; the optimal solution is found.
- When h(n) < actual cost to goal: additional nodes are expanded, but the optimal solution is still found.
- When h(n) > actual cost to goal: the optimal solution can be overlooked.

Optimality of A* (standard proof)

Suppose some suboptimal goal G2 has been generated and is in the queue. Let n be an unexpanded node on a shortest path to an optimal goal G1. Since h is admissible and h(G2) = 0, we have f(G2) = g(G2) > g(G1) ≥ f(n), so A* will expand n before G2 and therefore never returns a suboptimal goal.

Consistent Heuristics

- A heuristic is consistent if, for every node n and every successor n' of n generated by any action a:
  h(n) ≤ c(n, a, n') + h(n')
- That is, the estimated cost of reaching the goal from n is no greater than the step cost of getting to n' plus the estimated cost of reaching the goal from n'.
- If h is consistent, we have:
  f(n') = g(n') + h(n')
        = g(n) + c(n, a, n') + h(n')
        ≥ g(n) + h(n)
        = f(n)
  so if h(n) is consistent, the values of f(n) along any path are nondecreasing.
- Theorem: if h(n) is consistent, A* using GRAPH-SEARCH is optimal.
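On a finite graph, consistency can be checked mechanically: the condition h(n) ≤ c(n, a, n') + h(n') only needs to be tested on every edge. This sketch (helper name is mine; data from the AIMA Romania example) verifies it for the h_SLD fragment used earlier:

```python
# Fragment of the Romania map: node -> [(successor, step cost), ...]
graph = {
    'Arad': [('Sibiu', 140), ('Timisoara', 118), ('Zerind', 75)],
    'Sibiu': [('Fagaras', 99), ('Rimnicu Vilcea', 80)],
    'Fagaras': [('Bucharest', 211)],
    'Rimnicu Vilcea': [('Pitesti', 97)],
    'Pitesti': [('Bucharest', 101)],
}
h_sld = {'Arad': 366, 'Sibiu': 253, 'Timisoara': 329, 'Zerind': 374,
         'Fagaras': 176, 'Rimnicu Vilcea': 193, 'Pitesti': 100, 'Bucharest': 0}

def is_consistent(graph, h):
    """True iff h(n) <= c(n, n') + h(n') holds for every edge of the graph."""
    return all(h[n] <= c + h[nxt]
               for n, edges in graph.items()
               for nxt, c in edges)

# h_SLD satisfies the triangle inequality on every edge of this fragment,
# so is_consistent(graph, h_sld) is True; inflating one h-value (say,
# setting h(Sibiu) = 400) breaks it on the Sibiu -> Fagaras edge.
```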

Optimality of A* (more useful proof)

f-contours

How do the contours look when h(n) = 0?

- Uninformed search: the bands circle around the initial state.
- A* search: the bands stretch toward the goal, and are narrowed around the optimal path when more accurate heuristics are used.

Properties of A* Search

- Complete? Yes – unless there are infinitely many nodes with f ≤ f(G).
- Time? O(b^d) – exponential in [(relative error in h) × (length of the solution)].
- Space? O(b^d) – keeps all nodes in memory.
- Optimal? Yes – A* cannot expand the f_{i+1} contour until the f_i contour is finished.

Admissible Heuristics

Relaxed Problem

- Admissible heuristics can be derived from the exact solution cost of a relaxed version of the problem.
- If the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then h1(n) gives the shortest solution.
- If the rules are relaxed so that a tile can move to any adjacent square, then h2(n) gives the shortest solution.
- Key point: the optimal solution cost of a relaxed problem is no greater than the optimal solution cost of the real problem.

Dominance

- Definition: if h2(n) ≥ h1(n) for all n (both admissible), then h2 dominates h1.
- For the 8-puzzle, h2 indeed dominates h1:
  h1(n) = number of misplaced tiles
  h2(n) = total Manhattan distance
- If h2 dominates h1, then h2 is better for search.
- Typical 8-puzzle search costs (IDS = Iterative Deepening Search):
  d = 14: IDS = 3,473,941 nodes; A*(h1) = 539 nodes; A*(h2) = 113 nodes
  d = 24: IDS ≈ 54,000,000,000 nodes; A*(h1) = 39,135 nodes; A*(h2) = 1,641 nodes
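Both heuristics are easy to compute directly. This sketch represents an 8-puzzle state as a 9-tuple read row by row with 0 for the blank, and uses the start state from the AIMA example, for which h1 = 8 and h2 = 18:

```python
def h1(state, goal):
    """Number of misplaced tiles (the blank, 0, is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal):
    """Total Manhattan distance of each tile from its goal square (3x3 board)."""
    pos = {v: i for i, v in enumerate(goal)}  # goal index of each tile
    return sum(abs(i // 3 - pos[v] // 3) + abs(i % 3 - pos[v] % 3)
               for i, v in enumerate(state) if v != 0)

# Start state from the AIMA 8-puzzle example, goal = blank in the top-left:
start = (7, 2, 4, 5, 0, 6, 8, 3, 1)
goal  = (0, 1, 2, 3, 4, 5, 6, 7, 8)
# h1(start, goal) -> 8    (every tile is misplaced)
# h2(start, goal) -> 18   (>= h1, as dominance requires)
```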

Memory-Bounded Search

- Iterative-Deepening A* Search
- Recursive Best-First Search
- Simplified Memory-Bounded A* Search

Iterative-Deepening A* Search (IDA*)

- The idea of iterative deepening is adapted to the heuristic search context to reduce memory requirements.
- At each iteration, DFS is performed using the f-cost (g + h) as the cutoff rather than the depth.
- The cutoff for the next iteration is the smallest f-cost of any node that exceeded the cutoff on the previous iteration.
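The iteration scheme above can be sketched as follows (an illustrative recursive implementation, not the textbook pseudocode): each pass is a depth-first search bounded by the current f-cutoff, and the smallest overshoot becomes the next cutoff:

```python
import math

def ida_star(start, goal_test, successors, h):
    """IDA*: repeated DFS with an f = g + h cutoff. `successors(state)`
    yields (cost, next_state) pairs. Returns a solution path or None."""
    def dfs(path, g, bound):
        node = path[-1]
        f = g + h(node)
        if f > bound:
            return f                      # report the overshoot
        if goal_test(node):
            return path                   # found a solution within the bound
        minimum = math.inf                # smallest f that exceeded the bound
        for cost, nxt in successors(node):
            if nxt not in path:           # avoid cycles on the current path
                t = dfs(path + [nxt], g + cost, bound)
                if isinstance(t, list):
                    return t
                minimum = min(minimum, t)
        return minimum

    bound = h(start)
    while True:
        t = dfs([start], 0, bound)
        if isinstance(t, list):
            return t
        if t == math.inf:
            return None                   # no goal reachable
        bound = t                         # next cutoff = smallest overshoot
```

Between iterations only the single number `bound` is carried over, which is what gives IDA* its O(bd)-like space behaviour.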

Properties of IDA*

- IDA* is complete and optimal.
- Space complexity: O(b·f(G)/δ) ≈ O(bd), where δ is the smallest step cost and f(G) is the optimal solution cost.
- Time complexity: O(αb^d), where α is the number of distinct f values smaller than that of the optimal goal.
- Between iterations, IDA* retains only a single number: the f-cost cutoff.
- IDA* has difficulties in implementation when dealing with real-valued costs.

Recursive Best-First Search (RBFS)

- Attempts to mimic best-first search but uses only linear space.
- Can be implemented as a recursive algorithm.
- Keeps track of the f-value of the best alternative path available from any ancestor of the current node.
- If the current node exceeds this limit, the recursion unwinds back to the alternative path.
- As the recursion unwinds, the f-value of each node along the path is replaced with the best f-value of its children.

Ex: the route-finding problem

Properties of RBFS

- RBFS is complete and optimal.
- Space complexity: O(bd).
- Time complexity: O(b^d) in the worst case; it depends on the heuristic and on how frequently the best path changes ("mind changes").
- The same states may be explored many times.

Simplified Memory-Bounded A* Search (SMA*)

- Makes use of all available memory M to carry out A*.
- Expands the best leaf, like A*, until memory is full.
- When memory is full, drops the worst leaf node (the one with the highest f-value).
- Like RBFS, backs up the value of the forgotten node to its parent if it is the best in its parent's subtree.
- When all of a node's children have been dropped, the parent is put back on the fringe for further expansion.

Properties of SMA*

- Complete if M ≥ d.
- Optimal if M ≥ d.
- Space complexity: O(M).
- Time complexity: O(b^d) in the worst case.

Summary

- This chapter has examined the application of heuristics to reduce search costs.
- We have looked at a number of algorithms that use heuristics, and found that optimality comes at a stiff price in terms of search cost, even with good heuristics.
- Best-first search is just GENERAL-SEARCH where the minimum-cost nodes (according to some measure) are expanded first.
- If we minimize the estimated cost to reach the goal, h(n), we get greedy search. The search time is usually decreased compared to an uninformed algorithm, but the algorithm is neither optimal nor complete.
- Minimizing f(n) = g(n) + h(n) combines the advantages of uniform-cost search and greedy search. If we handle repeated states and guarantee that h(n) never overestimates, we get A* search.

- A* is complete, optimal, and optimally efficient among all optimal search algorithms, but its space complexity is still prohibitive.
- The time complexity of heuristic algorithms depends on the quality of the heuristic function.
- Good heuristics can sometimes be constructed by examining the problem definition or by generalizing from experience with the problem class.
- We can reduce the space requirement of A* with memory-bounded algorithms such as IDA* (iterative-deepening A*) and SMA* (simplified memory-bounded A*).