Heuristics & Local Search


Heuristics & Local Search
CSE 473, University of Washington
© D. Weld, D. Fox; pattern-database slides adapted from a presentation by Richard Korf

Admissible Heuristics
f(n) = g(n) + h(n)
- g(n): cost of the path so far
- h(n): an underestimate of the remaining cost to the goal
Where do heuristics come from?
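To ground the formula, here is a minimal sketch of best-first search ordered by f(n) = g(n) + h(n); the `successors` and `heuristic` callables are hypothetical placeholders for whatever problem is being solved. With an admissible h, this is A* and the first goal popped is optimal.

```python
import heapq
from itertools import count

def astar(start, goal_test, successors, heuristic):
    """Best-first search ordered by f(n) = g(n) + h(n).

    successors(state) yields (step_cost, next_state) pairs;
    heuristic(state) must never overestimate the true remaining
    cost (admissibility) for the returned solution to be optimal.
    """
    tie = count()  # tie-breaker so heapq never compares states
    frontier = [(heuristic(start), next(tie), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        _, _, g, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return path, g
        for cost, nxt in successors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(
                    frontier,
                    (g2 + heuristic(nxt), next(tie), g2, nxt, path + [nxt]))
    return None, float("inf")
```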

Relaxed Problems
- Derive an admissible heuristic from the exact cost of a solution to a relaxed version of the problem.
- Transportation planning: relax the requirement that the car stay on the road → Euclidean distance.
- Blocks world: distance = number of move operations; heuristic = number of misplaced blocks. What is the relaxed problem? (Example: # out of place = 2, true distance to goal = 3.)
- Cost of an optimal solution to the relaxed problem ≤ cost of an optimal solution to the real problem.

Simplifying Integrals
- Vertex = formula
- Goal = closed-form formula without integrals
- Arcs = mathematical transformations
- Heuristic = number of integrals still in the formula
- What is being relaxed?

Traveling Salesman Problem
Problem space:
- States = partial path (not necessarily connected)
- Operator = add an edge
- Start state = empty path
- Goal = complete path
Heuristic? What can be relaxed?
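The slide only poses the question; one classic answer (our choice here, not stated on the slide) relaxes "tour" to "connected spanning subgraph", giving the minimum-spanning-tree heuristic. A sketch using Prim's algorithm, where `dist` is a hypothetical symmetric distance function over the remaining cities:

```python
def mst_cost(nodes, dist):
    """Cost of a minimum spanning tree over `nodes` (Prim's algorithm).

    Every tour is a connected spanning subgraph, so the MST cost
    never exceeds the optimal tour cost over the same cities:
    an admissible heuristic for the remaining-cities subproblem.
    """
    nodes = list(nodes)
    if not nodes:
        return 0.0
    best = {n: dist(nodes[0], n) for n in nodes[1:]}  # cheapest edge into tree
    total = 0.0
    while best:
        n = min(best, key=best.get)   # closest node not yet in the tree
        total += best.pop(n)
        for m in best:                # relax remaining connection costs
            best[m] = min(best[m], dist(n, m))
    return total
```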

Heuristics for the Eight Puzzle
[figure: example start and goal tile configurations]
What can we relax?
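Two standard relaxations answer the slide's question: let tiles teleport to their home squares (h1, misplaced tiles) or let them slide through occupied squares (h2, Manhattan distance). A sketch; the tuple-of-9 board encoding with 0 for the blank, and the particular GOAL layout, are our assumptions:

```python
GOAL = (1, 2, 3, 8, 0, 4, 7, 6, 5)  # example goal layout; any fixed goal works

def h1_misplaced(state, goal=GOAL):
    """Tiles out of place (relaxation: tiles may teleport home)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2_manhattan(state, goal=GOAL):
    """Sum of tile distances from home (relaxation: tiles slide
    through each other, ignoring occupancy)."""
    home = {tile: divmod(i, 3) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile != 0:
            r, c = divmod(i, 3)
            gr, gc = home[tile]
            total += abs(r - gr) + abs(c - gc)
    return total
```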

Importance of Heuristics
[figure: example 8-puzzle start state]
- h1 = number of tiles in the wrong place
- h2 = sum of distances of tiles from their correct locations
Search cost (nodes) by solution depth d:

  d     IDS        A*(h1)    A*(h2)
  2     10         6         6
  4     112        13        12
  6     680        20        18
  8     6384       39        25
  10    47127      93        39
  12    364404     227       73
  14    3473941    539       113
  18    --         3056      363
  24    --         39135     1641

Need More Power!
Performance of the Manhattan distance heuristic:
- 8 Puzzle: < 1 second
- 15 Puzzle: 1 minute
- 24 Puzzle: 65,000 years
We need even better heuristics!

Pattern Databases [Culberson & Schaeffer 1996]
- Pick any subset of tiles, e.g., 3, 7, 11, 12, 13, 14, 15.
- Precompute a table of the optimal cost of solving just these tiles, for all possible configurations (57 million in this case).
- Build it with breadth-first search back from the goal state, where a state is the position of just these tiles (and the blank).
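A sketch of the precomputation for the 8-puzzle (the slides discuss the 15-puzzle; the 3x3 grid and tuple encoding here are our simplifications). Tiles outside the chosen subset are anonymized, and BFS from the abstracted goal records the optimal abstract cost of every reachable configuration:

```python
from collections import deque

def build_pdb(goal, tiles):
    """BFS backward from the goal over abstract states.

    An abstract state keeps only the chosen `tiles` and the blank (0);
    every other tile is anonymized to None. The returned table maps
    each abstract state to the minimum number of moves to reach it
    from the abstracted goal.
    """
    def abstract(state):
        return tuple(t if t in tiles or t == 0 else None for t in state)

    def neighbors(state):
        state = list(state)
        b = state.index(0)
        r, c = divmod(b, 3)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= r + dr < 3 and 0 <= c + dc < 3:
                j = (r + dr) * 3 + (c + dc)
                nxt = state[:]
                nxt[b], nxt[j] = nxt[j], nxt[b]  # slide a tile into the blank
                yield tuple(nxt)

    start = abstract(goal)
    table = {start: 0}
    frontier = deque([start])
    while frontier:
        s = frontier.popleft()
        for n in neighbors(s):
            if n not in table:
                table[n] = table[s] + 1
                frontier.append(n)
    return table
```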

Using a Pattern Database
- As each state is generated, use the position of the chosen tiles as an index into the DB, and use the looked-up value as the heuristic h(n).
- Admissible?
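The lookup is then a dictionary access over the same abstraction (continuing the sketch above). It is admissible because solving the full puzzle must, at minimum, solve the abstracted subproblem:

```python
def pdb_heuristic(state, table, tiles):
    """h(n) = optimal cost of placing just the chosen tiles."""
    key = tuple(t if t in tiles or t == 0 else None for t in state)
    return table.get(key, 0)  # unseen abstractions default to 0 (still admissible)
```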

Combining Multiple Databases
- Choose other sets of tiles and precompute multiple tables.
- How do we combine the table values? Each is an admissible lower bound, so take their max.
- Example: optimal solutions to Rubik's Cube were first found with IDA* using pattern-DB heuristics. Multiple DBs were used (for different subsets of cubies); most problems were solved optimally in 1 day, compared with 574,000 years for IDDFS.

Drawbacks of Standard Pattern DBs
- Since we can only take the max, additional DBs give diminishing returns.
- We would like to be able to add values instead.

Disjoint Pattern DBs
[figure: 15-puzzle tiles partitioned into disjoint sets, e.g., tiles 1-8 and tiles 9-15]
- Partition the tiles into disjoint sets and precompute a table for each set (e.g., an 8-tile DB has 519 million entries, a 7-tile DB has 58 million).
- During search, look up a heuristic value for each set. Because the sets are disjoint, the values can be added without overestimating!
- Manhattan distance is a special case of this idea in which each set is a single tile.
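A sketch of the additive lookup. One caveat worth flagging: for the sum to remain admissible, each table must count only the moves of its own tiles (unlike the all-moves BFS sketched earlier), so that no real move is credited to two tables:

```python
def disjoint_pdb_heuristic(state, tables):
    """Sum lookups from PDBs built over disjoint tile sets.

    `tables` is a list of (tile_set, table) pairs. Each table must
    record only moves made by tiles in its own set; then every real
    move is counted by at most one table and the sum is admissible.
    """
    total = 0
    for tiles, table in tables:
        key = tuple(t if t in tiles or t == 0 else None for t in state)
        total += table.get(key, 0)
    return total
```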

Performance
- 15 Puzzle: 2000x speedup vs. Manhattan distance; IDA* with the two DBs shown previously solves 15 Puzzles optimally in 30 milliseconds.
- 24 Puzzle: 12-million-fold speedup vs. Manhattan distance; IDA* can solve random instances in 2 days, using the 4 DBs shown, each with 128 million entries. Without PDBs: 65,000 years.

Local Search Algorithms
- In many optimization problems the path to the goal is irrelevant; the goal state itself is the solution.
- State space = set of "complete" configurations; the task is to find a configuration satisfying the constraints, e.g., n-queens.
- In such cases we can use local search algorithms, which keep a single "current" state and try to improve it.

Hill Climbing ("gradient ascent")
- Idea: always choose the best child; no backtracking. Equivalently, beam search with |queue| = 1.
- Problems? Local maxima, plateaus, diagonal ridges.
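A minimal sketch, assuming hypothetical `neighbors` and `value` callables for the problem at hand:

```python
def hill_climbing(start, neighbors, value):
    """Greedy local search: repeatedly move to the best neighbor,
    stopping at a local maximum (or the edge of a plateau)."""
    current = start
    while True:
        best = max(neighbors(current), key=value, default=None)
        if best is None or value(best) <= value(current):
            return current  # no neighbor improves on the current state
        current = best
```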

Stochastic Hill Climbing
- Randomly disobey the heuristic: sometimes accept a move that is not the best.
- Random restarts: when stuck, restart from a fresh random state and keep the best result.
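A sketch of the random-restart wrapper, reusing the `hill_climbing` sketch above; `random_state` is a hypothetical generator of fresh starting configurations:

```python
def random_restart_hill_climbing(random_state, neighbors, value, restarts=25):
    """Run hill climbing from several random starts; keep the best summit.
    With enough restarts this escapes local maxima with high probability."""
    best = None
    for _ in range(restarts):
        summit = hill_climbing(random_state(), neighbors, value)
        if best is None or value(summit) > value(best):
            best = summit
    return best
```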

Simulated Annealing
- Objective: avoid local minima.
- Technique:
  - For the most part, use hill climbing.
  - When no improvement is possible, choose a random neighbor; let Δ be the decrease in quality, and move to that neighbor with probability e^(−Δ/T).
  - Reduce the "temperature" T over time.
- Pros & cons: Optimal? If T is decreased slowly enough, the search will reach the optimal state. Widely used. See also WalkSAT.
[figure: temperature decreasing over time]
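A sketch, assuming maximization, hypothetical `neighbors` and `value` callables, and a simple geometric cooling schedule (the slides do not fix a schedule):

```python
import math
import random

def simulated_annealing(start, neighbors, value,
                        t0=1.0, cooling=0.995, t_min=1e-4):
    """Accept every improving move; accept a worsening move with
    quality drop delta with probability exp(-delta / T), cooling T
    geometrically each step."""
    current, t = start, t0
    while t > t_min:
        nxt = random.choice(neighbors(current))
        delta = value(current) - value(nxt)  # > 0 means nxt is worse
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = nxt
        t *= cooling
    return current
```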

Local Beam Search
- Idea: best-first search, but keep only the N best items on the priority queue.
- Evaluation:
  - Complete? No.
  - Time complexity? O(b^d)
  - Space complexity? O(b + N)
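A sketch with a fixed step budget; `successors` and `value` are hypothetical, and the slide's N is `beam_width` here:

```python
import heapq

def local_beam_search(starts, successors, value, beam_width, steps=1000):
    """Keep only the `beam_width` best states at each step."""
    beam = heapq.nlargest(beam_width, starts, key=value)
    for _ in range(steps):
        candidates = [s for state in beam for s in successors(state)]
        if not candidates:
            break
        beam = heapq.nlargest(beam_width, candidates, key=value)
    return max(beam, key=value)
```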

Genetic Algorithms
- Start with a random population.
  - Representation is serialized (states encoded as strings or lists).
  - States are ranked with a "fitness function".
- Produce a new generation:
  - Select random pair(s), with probability proportional to fitness.
  - Randomly choose a "crossover point"; offspring mix halves.
  - Randomly mutate bits.
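A sketch of that loop, assuming each individual is a list of integers in range(n) (matching the n-queens encoding used on the next slide); selection is fitness-proportional and crossover is one-point:

```python
import random

def genetic_algorithm(population, fitness, generations=200, p_mutate=0.05):
    """Each generation: fitness-proportional selection, one-point
    crossover, and per-gene mutation."""
    for _ in range(generations):
        # Epsilon avoids an all-zero weight vector in degenerate populations.
        weights = [fitness(ind) + 1e-9 for ind in population]
        nxt = []
        for _ in range(len(population)):
            mom, dad = random.choices(population, weights=weights, k=2)
            cut = random.randrange(1, len(mom))   # crossover point
            child = mom[:cut] + dad[cut:]         # offspring mix halves
            for i in range(len(child)):
                if random.random() < p_mutate:    # randomly mutate genes
                    child[i] = random.randrange(len(child))
            nxt.append(child)
        population = nxt
    return max(population, key=fitness)
```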

Genetic Algorithms: 8-Queens Fitness
- Fitness function: number of non-attacking pairs of queens (min = 0, max = 8 × 7/2 = 28).
- Selection probability = fitness / total fitness of the population, e.g., 24/(24+23+20+11) ≈ 31%, 23/(24+23+20+11) ≈ 29%, etc.
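A sketch of that fitness function, assuming a board encoded as a list `rows` where `rows[c]` is the row of the queen in column c (the encoding is our choice; the slide shows boards pictorially):

```python
def queens_fitness(rows):
    """Number of non-attacking pairs: n*(n-1)/2 minus attacking pairs,
    so the maximum is 28 for n = 8."""
    n = len(rows)
    attacking = 0
    for a in range(n):
        for b in range(a + 1, n):
            same_row = rows[a] == rows[b]
            same_diag = abs(rows[a] - rows[b]) == b - a
            if same_row or same_diag:
                attacking += 1
    return n * (n - 1) // 2 - attacking
```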

Genetic Algorithms (continued)
[figure: selection, crossover, and mutation illustrated on 8-queens board states]