Heuristic Functions By Peter Lane

Heuristics are strategies that use readily accessible information to guide problem-solving algorithms.

One of the earliest studied search problems is the 8-puzzle. A solution for a randomly generated 8-puzzle instance takes about 22 steps, with a branching factor of about 3, so an exhaustive search to depth 22 would examine roughly 3^22 (about 3.1 x 10^10) states. A good heuristic makes the problem far more tractable.

When solving a problem with, for example, A*, we need a heuristic function that does not overestimate the number of steps to the goal. Here are two candidates for the 8-puzzle:
h1 = the number of misplaced tiles.
h2 = the sum of the distances of the tiles from their goal positions. Since tiles cannot move diagonally, the distance counted is the sum of the horizontal and vertical distances, also known as the Manhattan distance.
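As a rough illustration of these two heuristics, here is a minimal Python sketch. The board encoding (a tuple of nine values read row by row, with 0 for the blank) and the names h1_misplaced and h2_manhattan are my own assumptions, not something from the slides.

```python
# A state is a tuple of 9 values read row by row; 0 marks the blank.
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)

def h1_misplaced(state, goal=GOAL):
    """Number of tiles (excluding the blank) not in their goal position."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2_manhattan(state, goal=GOAL):
    """Sum of horizontal plus vertical distances of each tile from its goal."""
    total = 0
    for index, tile in enumerate(state):
        if tile == 0:
            continue
        goal_index = goal.index(tile)
        total += abs(index // 3 - goal_index // 3) + abs(index % 3 - goal_index % 3)
    return total

if __name__ == "__main__":
    start = (7, 2, 4, 5, 0, 6, 8, 3, 1)   # an example scrambled board
    print(h1_misplaced(start), h2_manhattan(start))   # prints: 8 18
```

On this example board h1 = 8 and h2 = 18; h2 is at least as large as h1, which is consistent with the dominance result discussed below.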

Characterizing the quality of a heuristic: the effective branching factor. Let N be the total number of nodes generated by A* and d the solution depth. Then b* is the branching factor that a uniform tree of depth d would need in order to contain N + 1 nodes: N + 1 = 1 + b* + (b*)^2 + … + (b*)^d. A well-designed heuristic has a value of b* close to 1.
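Given N and d from a run of A*, the equation above can be solved for b* numerically. The following is a minimal sketch using bisection; the function name and tolerance are my own choices.

```python
def effective_branching_factor(n_generated, depth, tolerance=1e-6):
    """Solve N + 1 = 1 + b* + (b*)^2 + ... + (b*)^d for b* by bisection."""
    def tree_size(b):
        # Number of nodes in a uniform tree of branching factor b and depth d.
        return sum(b ** i for i in range(depth + 1))

    low, high = 1.0, float(n_generated)   # b* lies between 1 and N
    while high - low > tolerance:
        mid = (low + high) / 2
        if tree_size(mid) < n_generated + 1:
            low = mid
        else:
            high = mid
    return (low + high) / 2

# For example, if A* finds a depth-5 solution after generating 52 nodes,
# b* works out to about 1.92.
print(round(effective_branching_factor(52, 5), 2))
```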

When testing the two example heuristics we find that h2 gives better results than h1. In fact h2 dominates h1: h2(n) >= h1(n) for every node n, so A* using h2 will never expand more nodes than A* using h1. It is always better to use a heuristic function with higher values, provided it does not overestimate the number of steps to the goal.

How to invent admissible heuristics. Relaxed problem: a version of the problem with fewer restrictions on the actions. The resulting heuristic is admissible because the optimal solution of the original problem is also a solution of the relaxed problem, and so must be at least as expensive as the relaxed problem's optimal solution. Example: if the actions are relaxed to simply "a tile can move from square A to any square B", then h1 gives the exact cost of this relaxed problem, since each misplaced tile can move to its goal square in one step. If the relaxed problem is still hard to solve exactly, then the values of the corresponding heuristic will be expensive to compute.

Continuing on inventing heuristics: subproblems. Example: getting tiles 1, 2, 3 and 4 of the 8-puzzle into their correct positions, ignoring the other tiles. The cost of solving this subproblem is a lower bound on (and usually much less than) the cost of solving the complete problem. A pattern database stores the solution cost for every subproblem instance. During a search, we compute the heuristic for each complete state by looking up the corresponding subproblem configuration in the database.
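As a sketch of how such a database might be built (my own illustrative construction, not necessarily the one behind the slides): run a backward breadth-first search over abstract states in which only tiles 1-4 and the blank are distinguished, and every other tile is blurred into a "don't care" cell. The encoding and all names below are assumptions.

```python
from collections import deque

PATTERN_TILES = {1, 2, 3, 4}      # the subproblem: place tiles 1-4
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)
DONT_CARE = -1

def abstract(state):
    """Keep the blank and the pattern tiles; blur everything else."""
    return tuple(t if t == 0 or t in PATTERN_TILES else DONT_CARE for t in state)

def neighbours(state):
    """All abstract states reachable by sliding one tile into the blank."""
    blank = state.index(0)
    row, col = divmod(blank, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            other = r * 3 + c
            swapped = list(state)
            swapped[blank], swapped[other] = swapped[other], swapped[blank]
            yield tuple(swapped)

def build_pattern_database():
    """Backward BFS from the abstract goal: cost of every reachable pattern."""
    start = abstract(GOAL)
    costs = {start: 0}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        for nxt in neighbours(state):
            if nxt not in costs:
                costs[nxt] = costs[state] + 1
                queue.append(nxt)
    return costs

PDB = build_pattern_database()

def h_pdb(state):
    """Heuristic: exact cost of placing tiles 1-4, ignoring the others."""
    return PDB[abstract(state)]
```

Because every real move maps to a legal abstract move, the looked-up cost never overestimates the true solution cost. The table is built once before searching, and each lookup during search is then a constant-time dictionary access.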

Learning from "experience". In our example, experience means solving lots of 8-puzzles. The solutions to previous problems provide training examples from which h(n) can be learned: each example consists of a state from a solution path and the cost of the solution from that point.

Inductive learning algorithms can construct a function h(n) that predicts solution costs for other states that arise during search. They work best when supplied with features of a state that are relevant to its evaluation. One example feature, x1(n), is the number of misplaced tiles, which helps predict the distance of a state from the goal. Another example feature, x2(n), is the number of pairs of adjacent tiles that are also adjacent in the goal state. The two features can be combined to predict h(n), for instance linearly: h(n) = c1 × x1(n) + c2 × x2(n). The constants c1 and c2 are adjusted to give the best fit to the actual data on solution costs; c1 should come out positive and c2 negative.
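A minimal sketch of such a fit, assuming we already have (x1, x2, cost) triples from previously solved puzzles. The numbers below are illustrative placeholders rather than real measurements, and ordinary least squares via numpy is just one way to choose c1 and c2.

```python
import numpy as np

# Each row pairs (x1 = misplaced tiles, x2 = adjacent pairs also adjacent in
# the goal) with the true solution cost of that state. Illustrative data only.
features = np.array([
    [8, 1],
    [5, 3],
    [3, 4],
    [7, 2],
], dtype=float)
costs = np.array([19.0, 10.0, 4.0, 16.0])

# Least-squares fit of h(n) = c1 * x1(n) + c2 * x2(n).
coeffs, *_ = np.linalg.lstsq(features, costs, rcond=None)
c1, c2 = coeffs

def h_learned(x1, x2):
    """Predicted solution cost for a state with features x1 and x2."""
    return c1 * x1 + c2 * x2

print(f"c1 = {c1:.2f}, c2 = {c2:.2f}, h = {h_learned(6, 2):.1f}")
```

With these toy numbers c1 comes out positive and c2 negative, matching the expectation above. Note that a learned h(n) of this kind is not guaranteed to be admissible, so A* using it may lose its optimality guarantee.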

Quick Summary. Heuristics are strategies used by problem-solving algorithms. The quality of a heuristic is measured by its effective branching factor. There are several ways to invent a heuristic: relaxed problems, pattern databases of subproblems, and learning from experience.

~fin