Pattern Databases. Robert Holte, University of Alberta. November 6, 2002.

Pattern Database Successes (1): Joe Culberson & Jonathan Schaeffer (1994)
–15-puzzle (10^13 states)
–2 hand-crafted patterns (“fringe” (FR) and “corner” (CO))
–Each PDB contains over 500 million entries (< 10^9 abstract states)
–Used symmetries to compress the PDBs and enhance their use
–Used in conjunction with Manhattan Distance (MD)
Reduction in size of the search tree (nodes expanded with MD alone vs. with the combined heuristic):
–346x for max(MD, FR)
–437x for max(MD, CO)
–1038x for max(MD, interleave(FR, CO))

Pattern Database Successes (2): Rich Korf (1997)
–Rubik’s Cube (10^19 states)
–3 hand-crafted patterns, all used together (max)
–Each PDB contains over 42 million entries
–Took 1 hour to build all the PDBs
Results:
–First time random instances had been solved optimally
–Hardest instance (solution length 18) took 17 days
–The best known MD-like heuristic would have taken a century

Pattern Database Successes (3): Stefan Edelkamp (2001)
–Planning benchmarks, e.g. Logistics and Blocks World
–Automatically generated PDBs (not domain abstraction)
–Additive pattern databases (in some cases)
Results:
–PDBs competitive with the best planners
–Logistics domain (weighted A*): run time with the PDB heuristic about 100 times smaller than with the FF heuristic

Pattern Database Successes (4): Istvan Hernadvolgyi (2001)
–Macro-operators are concatenated to very quickly construct suboptimal solutions
–For Rubik’s Cube, hundreds of macro-operators are needed
–Each macro is found by searching in the Rubik’s Cube state space with a macro-specific “subgoal” and start state
–For every one of these searches, a PDB was generated automatically (by domain abstraction) so that an optimal-length macro could be found quickly
Results:
–Optimal-length macros for all subgoals found for the first time
–So quick that it permitted subgoals to be merged
–This shortened solutions from 90 moves to 50 (optimal is ~18)

Fundamental Questions
–How to invent effective heuristics?
–How to use memory to speed up search?
The pattern database answer to both: create a simplified version of your problem and use the exact distances in the simplified version as heuristic estimates in the original. Precompute all distances-to-goal in the simplified version and store them in a lookup table (the pattern database).
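A minimal sketch of that precomputation step in Python (abstract_goal and abstract_neighbors are hypothetical stand-ins for the abstract goal pattern and for a function that generates a pattern's abstract successors; unit-cost, invertible operators are assumed, as in the sliding-tile puzzles, so forward successors can stand in for predecessors):

    from collections import deque

    def build_pdb(abstract_goal, abstract_neighbors):
        """Breadth-first search backwards from the abstract goal,
        recording the distance-to-goal of every abstract state (pattern)."""
        pdb = {abstract_goal: 0}
        frontier = deque([abstract_goal])
        while frontier:
            pattern = frontier.popleft()
            for nxt in abstract_neighbors(pattern):
                if nxt not in pdb:          # first visit = shortest abstract distance
                    pdb[nxt] = pdb[pattern] + 1
                    frontier.append(nxt)
        return pdb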

Example: 8-puzzle. Domain = blank 1 2 3 4 5 6 7 8. 181,440 states.

“Patterns” created by a domain mapping. Domain = blank 1 2 3 4 5 6 7 8; Abstract = blank * * * * * * * * (every tile maps to the same “don't care” symbol; only the blank is kept distinct). This mapping produces 9 patterns, one per blank position. [Figure: an original state and its corresponding pattern.]

Pattern Database. [Table pairing each pattern with its distance to goal; the example entries shown have distances 3, 3 and 4.]

Calculating h(s): given a state s in the original problem, compute the corresponding pattern and look up its abstract distance-to-goal in the PDB. Heuristics defined by PDBs are consistent, not just admissible.
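A matching lookup sketch (abstraction is a hypothetical domain-mapping function, as in the example further below; pdb is the table built above):

    def pdb_heuristic(state, abstraction, pdb):
        """h(s): abstract the concrete state and look up its
        precomputed distance-to-goal in the pattern database."""
        pattern = abstraction(state)
        return pdb[pattern]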

Abstract Space

Efficiency: the preprocessing time to create a PDB is usually negligible compared to the time to solve one problem instance with no heuristic. Memory is the limiting factor.

“Pattern” = leave some tiles unique. Domain = blank 1 2 3 4 5 6 7 8; Abstract = blank * * * * * 6 7 8 (the blank and tiles 6, 7, 8 stay distinct; tiles 1-5 all map to the same “don't care” symbol). This gives 9 · 8 · 7 · 6 = 3,024 distinct patterns.

Domain Abstraction: a mapping from the original domain of symbols to a smaller abstract domain, e.g. Domain = blank 1 2 3 4 5 6 7 8 mapped to Abstract = blank * * * * * 6 7 8, giving 3,024 patterns.
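For concreteness, here is a sketch of this particular domain abstraction applied to an 8-puzzle state (the tuple encoding and the use of 0 for the blank are illustrative assumptions, not part of the slides):

    # The domain abstraction above: blank and tiles 6, 7, 8 stay distinct,
    # tiles 1-5 all collapse to the same "don't care" symbol '*'.
    PHI = {0: 0, 1: '*', 2: '*', 3: '*', 4: '*', 5: '*', 6: 6, 7: 7, 8: 8}

    def abstraction(state):
        """Map a concrete 8-puzzle state (a tuple of 9 tile labels, 0 = blank)
        to its pattern."""
        return tuple(PHI[tile] for tile in state)

    goal = (0, 1, 2, 3, 4, 5, 6, 7, 8)
    print(abstraction(goal))   # (0, '*', '*', '*', '*', '*', 6, 7, 8)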

8-puzzle PDB sizes (with the blank left unique)

Automatic Creation of Domain Abstractions. It is easy to enumerate all possible domain abstractions, and they form a lattice under the “more abstract than” relation: a mapping that collapses more symbols together is more abstract than one that keeps them distinct. [The slide shows two example mappings of Domain = blank 1 2 3 4 5 6 7 8, one more abstract than the other.]
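One plausible way to formalize the “more abstract than” relation, assuming domain abstractions are represented as symbol-to-symbol dictionaries (a sketch, not necessarily the representation used in the original work):

    def more_abstract(phi2, phi1, domain):
        """phi2 is at least as abstract as phi1 if it never separates two
        symbols that phi1 already maps together (phi2 factors through phi1)."""
        return all(phi2[a] == phi2[b]
                   for a in domain for b in domain
                   if phi1[a] == phi1[b])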

Problem: Non-surjectivity. Small example: Domain = blank 1 2; Abstract = blank 1 blank, i.e. tile 2 is mapped to the same abstract symbol as the blank. [A sequence of slides walks through this example; the figures, ending with a “??”, are not reproduced here.]

Pattern Database Experiments
Aim: to understand how search performance using PDBs is related to easily measurable characteristics of the PDBs, e.g. size or average value.
Basic method: choose a variety of state spaces; for each state space, generate thousands of PDBs; for each PDB, measure its characteristics and the performance of A* (IDA*, etc.) using it.

8-puzzle: A* vs. PDB size. [Plot: number of nodes expanded by A* versus pattern database size (number of abstract states).]

Korf & Reid (1998) When the depth bound is d, node n at level j will be expanded by IDA* iff [a] parent(n) was expanded [b] g(n)+h(n)  d, in other words h(n)  d-j [b]  [a] if the heuristic is consistent Total nodes expanded =  N(j)*P(j,d-j) –N(j) = # nodes at level j in the brute-force tree –P(j,x) = percentage of nodes at level j with h()  x

Korf & Reid – experiment In their 8-puzzle experiment: –Use exact N(j) –Approximate P(j,x) by EQ(x) = limit (j  ) P(j,x) –IDA*, but complete enumeration of last level –Run all 181,400 start states to all depths

Korf & Reid – results Seems the ideal tool for choosing which of two PDBs is better… dpredictionavg. error (all states)

Korf & Reid – stopping at goal dpredictionavg. error (all states) avg. error (states  d ) For choosing which of two PDBs is better in a practical setting, adaptations are needed.

Using Multiple Abstractions
Given 2 consistent heuristics, max(h1(s), h2(s)) is also consistent. In some circumstances they can be added instead.
How good is max?
–The hope: at least 2x as good, because it takes 2x the space.
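A sketch of the max combination, reusing the shapes from the earlier sketches (each (phi, pdb) pair is an abstraction together with its table):

    def max_pdb_heuristic(state, abstractions_and_pdbs):
        """Take the max over several PDB lookups; since each lookup is
        consistent, the maximum is consistent (and admissible) too."""
        return max(pdb[phi(state)] for phi, pdb in abstractions_and_pdbs)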

Max of 2 random PDBs. [Plot: in some cases search with max(h1, h2) is worse than with h1 alone.]

Instead of max - interleave. [Figure: a partition of the lookups, some using PDB 1 and the others using PDB 2.]

Interleaved Pattern Databases. The hope: almost as good as max, but with only half the memory. Intuitively, strict alternation between the PDBs is expected to be almost as good as max. How can this be generalized to any abstraction of any space?
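One simple way to realize such an alternation is to choose which PDB to consult by a fixed, state-dependent rule, so that each table only ever has to answer for its share of the states; the hash-parity rule below is just an illustrative choice, not necessarily the scheme used in these experiments:

    def interleaved_pdb_heuristic(state, phi1, pdb1, phi2, pdb2):
        """Consult exactly one PDB per state, chosen deterministically."""
        if hash(state) % 2 == 0:      # any fixed, state-dependent rule works
            return pdb1[phi1(state)]
        return pdb2[phi2(state)]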

2 random PDBs interleaved. 93 random pairs (with non-trivial LCA):
–4 had Max(h1, h2) > h1
–17 others had Interleave(h1, h2) > h1
–The remaining 72 were “normal”
[Plot comparing Max, Interleave and h1.]

Relative Performance. [Plot comparing Max, Interleave and h1.]

Current Research
Istvan Hernadvolgyi (Ph.D. student, U. Ottawa):
–automatic creation of good pattern databases
–adaptation to weighted graphs
Project students (U of A):
–Jack Newton: max of two pattern databases, interleaved pattern databases
–Daniel Neilson: additive abstractions
–Ajit Singh: predicting IDA* performance

Future Research
–compression of pattern databases
–understanding & avoiding non-surjectivity
–alternative methods of abstraction, such as projection