Rutgers CS440, Fall 2003 Heuristic search Reading: AIMA 2nd ed., Ch. 4.1-4.3.


Rutgers CS440, Fall 2003 (Blind) Tree search so far
Strategies are defined by the order of node expansion:
– BFS: fringe = FIFO
– DFS: fringe = LIFO

function TreeSearch (problem, fringe)
    n = InitialState (problem);
    fringe = Insert (n);
    while (1)
        If Empty (fringe) return failure;
        n = RemoveFront (fringe);
        If Goal (problem, n) return n;
        fringe = Insert ( Expand (problem, n) );
    end
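The same loop as a minimal runnable Python sketch. The Problem interface here (initial_state, is_goal, expand) is hypothetical, assumed only for illustration; a FIFO fringe gives BFS and a LIFO fringe gives DFS.

    from collections import deque

    def tree_search(problem, lifo=False):
        # Generic blind tree search.
        # lifo=False: FIFO fringe (BFS); lifo=True: LIFO fringe (DFS).
        fringe = deque([problem.initial_state()])
        while fringe:
            n = fringe.pop() if lifo else fringe.popleft()
            if problem.is_goal(n):
                return n
            fringe.extend(problem.expand(n))
        return None  # failure: fringe exhausted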

Rutgers CS440, Fall 2003 Example of BFS and DFS in a maze
(Figure: two maze diagrams, each with start state s0 and goal G, illustrating the cells visited by BFS and by DFS.)

Rutgers CS440, Fall 2003 Informed searches: best-first search
Idea: add domain-specific information to select the best path along which to continue searching.
How to do it?
– Use an evaluation function, f(n), which selects the most "desirable" (best-first) node to expand.
– The fringe is a queue sorted in decreasing order of desirability.
Ideal case: f(n) = true cost to the goal state = t(n). Of course, t(n) is expensive to compute (finding it exactly amounts to a blind search, e.g. BFS!), so use an estimate (heuristic) instead.
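As a sketch of the same idea in Python, the fringe becomes a priority queue keyed on f(n); heapq pops the smallest key, so a lower f means a more desirable node here. The problem interface is the same hypothetical one as above.

    import heapq
    from itertools import count

    def best_first_search(problem, f):
        # Expand the unexpanded node with the lowest f(n) first.
        tie = count()  # tiebreaker so the heap never compares states
        start = problem.initial_state()
        fringe = [(f(start), next(tie), start)]
        while fringe:
            _, _, n = heapq.heappop(fringe)
            if problem.is_goal(n):
                return n
            for child in problem.expand(n):
                heapq.heappush(fringe, (f(child), next(tie), child))
        return None  # failure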

Rutgers CS440, Fall 2003 Maze example
f(n) = straight-line distance to goal
(Figure: the maze with start state s0 and goal G, cells annotated with their straight-line distances.)

Rutgers CS440, Fall 2003 Romania example
(Figure: the AIMA map of Romania, with step costs on the roads and straight-line distances to Bucharest.)

Rutgers CS440, Fall 2003 Greedy search
Evaluation function: f(n) = h(n), a heuristic = estimate of the cost from n to the closest goal.
Greedy search expands the node that appears to be closest to a goal.
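In terms of the best-first sketch above, greedy search simply passes the heuristic itself as the evaluation function (h is assumed to be supplied with the problem, e.g. straight-line distance):

    goal = best_first_search(problem, f=h)  # greedy: f(n) = h(n)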

Rutgers CS440, Fall 2003 Example of greedy search
Expand Arad (h = 366): Sibiu h = 253, Timisoara h = 329, Zerind h = 374
Expand Sibiu: Arad h = 366, Fagaras h = 176, Oradea h = 380, Rimnicu V. h = 193
Expand Fagaras: Sibiu h = 253, Bucharest h = 0
Resulting path: Arad -> Sibiu -> Fagaras -> Bucharest. Is this the optimal solution?

Rutgers CS440, Fall 2003 Properties of greedy search
Completeness:
– Not complete; can get stuck in loops, e.g., Iasi -> Neamt -> Iasi -> Neamt -> …
– Can be made complete in finite spaces with repeated-state checking
Time complexity:
– O(b^m), but a good heuristic can give excellent improvement
Space complexity:
– O(b^m); keeps all nodes in memory
Optimality:
– No; can wander off to suboptimal goal states

Rutgers CS440, Fall 2003 A-search
Improve greedy search by discouraging wandering off:
f(n) = g(n) + h(n)
g(n) = cost of the path used to reach node n
h(n) = estimated cost from node n to a goal node
f(n) = estimated total cost through node n
Search along the most promising path, not just toward the most promising node.
Is A-search optimal?

Rutgers CS440, Fall 2003 Optimality of A-search
Theorem: Let h(n) be at most ε higher than t(n), for all n. Then the solution found by A-search will be at most ε "steps" longer than the optimal one.
Proof: If n is a node on the path found by A-search, then f(n) = g(n) + h(n) ≤ g(n) + t(n) + ε = optimal + ε.

Rutgers CS440, Fall 2003 A* search
How to make A-search optimal? Make ε = 0. This means the heuristic must always underestimate the true cost to the goal: h(n) has to be an optimistic estimate.
A heuristic with 0 ≤ h(n) ≤ t(n) is called an admissible heuristic.
Theorem: Tree A*-search is optimal if h(n) is admissible.
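A minimal tree-A* sketch in Python, tracking g explicitly so that f = g + h. It assumes the same hypothetical problem interface as above, plus a step-cost method problem.cost(n, n2) and a heuristic function h.

    import heapq
    from itertools import count

    def a_star(problem, h):
        # Tree A*: expand the lowest f = g + h; optimal when h is admissible.
        tie = count()
        start = problem.initial_state()
        fringe = [(h(start), next(tie), 0, start, [start])]  # (f, tie, g, node, path)
        while fringe:
            _, _, g, n, path = heapq.heappop(fringe)
            if problem.is_goal(n):
                return path, g  # solution path and its cost
            for child in problem.expand(n):
                g2 = g + problem.cost(n, child)
                heapq.heappush(fringe,
                               (g2 + h(child), next(tie), g2, child, path + [child]))
        return None  # failure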

Rutgers CS440, Fall 2003 Example of A* search (f = g + h)
Expand Arad (f = 366 = 0 + 366):
  Sibiu 393 = 140 + 253, Timisoara 447 = 118 + 329, Zerind 449 = 75 + 374
Expand Sibiu:
  Arad 646 = 280 + 366, Fagaras 415 = 239 + 176, Oradea 671 = 291 + 380, Rimnicu V. 413 = 220 + 193
Expand Rimnicu V.:
  Craiova 526 = 366 + 160, Pitesti 417 = 317 + 100, Sibiu 553 = 300 + 253
Expand Fagaras:
  Sibiu 591 = 338 + 253, Bucharest 450 = 450 + 0
Expand Pitesti:
  Bucharest 418 = 418 + 0, Craiova 615 = 455 + 160, Rimnicu V. 607 = 414 + 193
Solution: Arad -> Sibiu -> Rimnicu V. -> Pitesti -> Bucharest, f = 418.

Rutgers CS440, Fall 2003 A* optimality proof
Suppose a suboptimal goal G' is on the queue, and let n be an unexpanded node on a shortest path to the optimal goal G.
f(G') = g(G') + h(G') = g(G')   since h(G') = 0
       > g(G)                   since G' is suboptimal
       ≥ f(n) = g(n) + h(n)     since h is admissible
Hence f(G') > f(n), and G' will never be expanded.
Remember the example on the previous page?

Rutgers CS440, Fall 2003 Consistency of A*
Consistency condition: h(n) - c(n, n') ≤ h(n'), where n' is any successor of n.
Guarantees optimality of graph search (remember, the proof above was for tree search!). A mechanical check is sketched below.
Also called monotonicity:
f(n') = g(n') + h(n') = g(n) + c(n, n') + h(n') ≥ g(n) + h(n) = f(n)
so f(n') ≥ f(n): f(n) is monotonically non-decreasing along any path.
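On a finite graph the condition can be verified edge by edge; a sketch, assuming graph maps each node to a list of (successor, step-cost) pairs and h is a dict of heuristic values:

    def is_consistent(h, graph):
        # h(n) <= c(n, n') + h(n') must hold for every edge (n, n').
        return all(h[n] <= c + h[n2]
                   for n, successors in graph.items()
                   for n2, c in successors)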

Rutgers CS440, Fall 2003 Example of consistency/monotonicity
A* expands nodes in order of increasing f-value, adding "f-contours" of nodes, whereas BFS adds layers.

Rutgers CS440, Fall 2003 Properties of A*
Completeness:
– Yes (for finite graphs)
Time complexity:
– Exponential in |h(n) - t(n)| × length of solution
Memory requirements:
– Keeps all nodes in memory
Optimality:
– Yes

Rutgers CS440, Fall 2003 Admissible heuristics h(n)
Straight-line distance for the maze and Romania examples.
8-puzzle:
– h1(n) = number of misplaced tiles
– h2(n) = total number of squares each tile is away from its desired location (Manhattan distance)
(Figure: an example start state S, for which h1(S) = 7 and h2(S) = 14.)
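Both 8-puzzle heuristics are a few lines of Python. This sketch represents a state as a tuple of length 9 (0 is the blank) and assumes the goal layout (0, 1, ..., 8) purely for illustration:

    GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)  # assumed goal layout

    def h1(state):
        # Number of misplaced tiles (the blank is not counted).
        return sum(1 for i, tile in enumerate(state)
                   if tile != 0 and tile != GOAL[i])

    def h2(state):
        # Sum of Manhattan distances of each tile from its goal square.
        total = 0
        for i, tile in enumerate(state):
            if tile == 0:
                continue
            g = GOAL.index(tile)
            total += abs(i // 3 - g // 3) + abs(i % 3 - g % 3)
        return total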

Rutgers CS440, Fall 2003 Dominance
If h1(n) and h2(n) are both admissible and h1(n) ≤ h2(n) for all n, then h2(n) dominates h1(n).
Which one is better for search?
– h2(n), because it is "closer" to t(n)
Typical search costs (nodes expanded):
– d = 14: IDS = 3,473,941 nodes; A*(h1) = 539; A*(h2) = 113
– d = 24: IDS ~ 54 billion nodes; A*(h1) = 39,135; A*(h2) = 1,641

Rutgers CS440, Fall 2003 Relaxed problems
How to derive admissible heuristics? They can be derived from exact solutions to "simpler" (relaxed) versions of the problem one is trying to solve.
Examples:
– 8-puzzle: a tile can move anywhere from its current position (gives h1); tiles can occupy the same square but have to move one square at a time (gives h2)
– Maze: can move any distance and over any obstacles
Important: the cost of the optimal solution to a relaxed problem is no greater than the cost of the optimal solution to the real problem.

Rutgers CS440, Fall 2003 Local search
In many problems one does not care about the path; one only wants to find a goal state (based on a goal condition).
Use local search / iterative-improvement algorithms:
– Keep a single "current" state and try to improve it.

Rutgers CS440, Fall 2003 Example
N-queens problem: from the initial configuration, move to other configurations that reduce the number of conflicts (pairs of attacking queens).

Rutgers CS440, Fall 2003 Hill-climbing
Gradient ascent / descent. Goal: n* = arg max Value(n).

function HillClimbing (problem)
    n = InitialState (problem);
    while (1)
        neighbors = Expand (problem, n);
        n* = arg max Value (neighbors);
        If Value (n*) < Value (n), return n;
        n = n*;
    end
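A runnable version of the loop; neighbors and value are hypothetical problem-supplied functions, and unlike the slide's strict <, this sketch also stops on ties so it cannot wander a plateau forever.

    def hill_climbing(n, neighbors, value):
        # Greedy ascent: move to the best neighbor until none improves.
        while True:
            ns = list(neighbors(n))
            if not ns:
                return n
            best = max(ns, key=value)
            if value(best) <= value(n):  # no strict improvement: stop
                return n  # local maximum (or plateau)
            n = best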

Rutgers CS440, Fall 2003 Hill climbing (cont’d) Problem: Depending on initial state, can get stuck in local maxima (minima), ridges, plateaus

Rutgers CS440, Fall 2003 Beam search
The problem of local minima (maxima) in hill-climbing can be alleviated by starting HC from multiple random starting points (random-restart HC), or by making it stochastic (Stochastic HC): choose among successors at random, based on how "good" they are.
Local beam search, somewhat similar to random-restart HC (see the sketch below):
– Start from N initial states.
– Expand all N states and keep the N best successors.
Stochastic beam search: a stochastic version of LBS, similar to SHC.
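A sketch of local beam search under the same hypothetical interface: pool the successors of all N current states and keep the N best, for a fixed number of rounds.

    def local_beam_search(starts, neighbors, value, rounds=100):
        # Keep the N best states among the current pool and all successors.
        pool = list(starts)
        n = len(pool)
        for _ in range(rounds):
            candidates = pool + [s2 for s in pool for s2 in neighbors(s)]
            pool = sorted(candidates, key=value, reverse=True)[:n]
        return max(pool, key=value)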

Rutgers CS440, Fall 2003 Simulated annealing
Idea:
– Allow bad moves, initially more, later fewer
– Analogy with annealing in metallurgy

function SimulatedAnnealing (problem, schedule)
    n = InitialState (problem);
    t = 1;
    while (1)
        T = schedule(t);
        neighbors = Expand (problem, n);
        n' = Random (neighbors);
        ΔV = Value (n') - Value (n);
        If ΔV > 0, n = n';
        Else n = n' with probability exp (ΔV/T);
        t = t + 1;
    end
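The same loop as a runnable Python sketch; schedule is assumed to map the step count t to a temperature T, and the search returns the current state once T reaches 0.

    import math
    import random
    from itertools import count

    def simulated_annealing(problem, schedule):
        n = problem.initial_state()
        for t in count(1):
            T = schedule(t)
            if T <= 0:
                return n
            n2 = random.choice(problem.expand(n))
            dV = problem.value(n2) - problem.value(n)
            # Always accept improvements; accept a worse move
            # with probability exp(dV / T) (dV < 0, so this is < 1).
            if dV > 0 or random.random() < math.exp(dV / T):
                n = n2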

Rutgers CS440, Fall 2003 Properties of simulated annealing
At a fixed temperature T, the state occupation probability reaches the Boltzmann distribution, proportional to exp(V(n)/T).
Devised by Metropolis et al. in 1953.

Rutgers CS440, Fall 2003 Genetic algorithms