Informed Search Methods

Informed Search Methods Read Chapter 4. Use the text for more examples; work them out yourself.

Best First The store is replaced by a sorted data structure. Knowledge is added through the "sort" (evaluation) function. No guarantees yet; they depend on the quality of the evaluation function. Roughly Uniform Cost with a user-supplied evaluation function.
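A minimal sketch of this frame, assuming a successor function that yields (state, step_cost) pairs and a user-supplied evaluation function f; the name best_first_search and its signature are illustrative, not from any particular library. Plugging in different f's gives the algorithms that follow (uniform cost, greedy, A*).

```python
import heapq

def best_first_search(start, goal_test, successors, f):
    """Generic best-first search.

    successors(state) yields (next_state, step_cost) pairs;
    f(path_cost, state) scores a frontier entry (lower is better).
    Returns (path, path_cost), or None if no solution is found.
    """
    frontier = [(f(0, start), 0, [start])]   # (score, path cost, path)
    best_cost = {}                           # cheapest cost seen per expanded state
    while frontier:
        score, cost, path = heapq.heappop(frontier)
        state = path[-1]
        if goal_test(state):                 # test on selection, not on generation
            return path, cost
        if state in best_cost and best_cost[state] <= cost:
            continue
        best_cost[state] = cost
        for nxt, step in successors(state):
            new_cost = cost + step
            heapq.heappush(frontier, (f(new_cost, nxt), new_cost, path + [nxt]))
    return None
```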

Uniform Cost Now assume edges have positive costs. Storage = priority queue scored by path cost (or a sorted list, lowest values first). Select: choose the minimum-cost entry. Add: maintains the order. Check: careful; only test the minimum-cost entry for the goal. Complete and optimal. Time and space like Breadth-first.

Uniform Cost Example Root to A, cost 1; Root to B, cost 3; A to C, cost 4; B to C, cost 1. C is the goal state. Why is Uniform Cost optimal? Expanding a node is not the same as checking it.

Watch the queue (path/path-cost): R/0, then R-A/1, R-B/3, then R-B/3, R-A-C/5 (note: you don't test a node when it is generated, you put it in the queue), then R-B-C/4, R-A-C/5. The goal test succeeds only when R-B-C/4 is selected, so the cheaper path to C is the one returned.
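A hypothetical driver that reproduces this run, reusing the best_first_search sketch above with f equal to the path cost alone (which is exactly uniform cost):

```python
# Graph from the example; uniform cost = best-first with f(g, n) = g.
graph = {"R": [("A", 1), ("B", 3)], "A": [("C", 4)], "B": [("C", 1)], "C": []}

path, cost = best_first_search(
    start="R",
    goal_test=lambda s: s == "C",
    successors=lambda s: graph[s],
    f=lambda g, n: g,                 # score = path cost only
)
print(path, cost)                     # expected: ['R', 'B', 'C'] 4
```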

Concerns What knowledge is available? How can it be added to the search? What guarantees are there? Time Space

Greedy/Hill-climbing Search Add a heuristic h(n): the estimated cost of the cheapest solution from state n to the goal. Require h(goal) = 0. Complete? No; it can be misled.

Examples: Route finding (get from A to B): straight-line distance from the current city to B. 8-tile puzzle: number of misplaced tiles; or number plus distance of misplaced tiles.

A* Combines Greedy and Uniform Cost: f(n) = g(n) + h(n), where g(n) = current path cost to node n and h(n) = estimated cost from n to the goal. If h(n) <= the true cost to the goal, then h is admissible. Best-first using f with an admissible h is A*. Theorem: A* is optimal and complete.
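In terms of the best_first_search sketch above, A* is just f = g + h. A small, hypothetical route-finding example with straight-line distance as the heuristic (the map, coordinates, and road costs are made up for illustration; the costs are all at least the straight-line distances, so h is admissible):

```python
import math

# Hypothetical map: coordinates for h, road costs for g.
coords = {"A": (0, 0), "X": (2, 1), "Y": (1, 3), "B": (4, 3)}
roads  = {"A": [("X", 2.6), ("Y", 3.5)], "X": [("B", 3.1)], "Y": [("B", 3.0)], "B": []}

def h(n):
    """Straight-line distance to B; never longer than any road path."""
    (x1, y1), (x2, y2) = coords[n], coords["B"]
    return math.hypot(x1 - x2, y1 - y2)

path, cost = best_first_search(
    start="A",
    goal_test=lambda s: s == "B",
    successors=lambda s: roads[s],
    f=lambda g, n: g + h(n),          # A*: f = g + h
)
print(path, cost)                     # expected: ['A', 'X', 'B'] with cost 5.7
```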

Admissibility? Route finding: straight-line distance from the current city to B; is it always less than or equal to the true distance? 8-tile puzzle: number of misplaced tiles; is it less than or equal to the number of moves needed? What about number plus distance of misplaced tiles?

A* Properties Dechter and Pearl: A* is optimal among all algorithms using h (any such algorithm must expand at least as many nodes). If 0 <= h1 <= h2 and h2 is admissible, then h1 is also admissible, and A* with h1 will expand at least as many nodes as A* with h2. So bigger (but still admissible) is better. Sub-exponential if the error in the h estimate is within (approximately) the log of the true cost.

A* Special Cases Suppose h(n) = 0 => Uniform Cost. Suppose every step costs 1 and h(n) = 0 => Breadth First. With the non-admissible heuristic g(n) = 0, h(n) = 1/depth => Depth First. One code, many algorithms.

Heuristic Generation Relaxation: make the problem simpler. Route planning: don't worry about roads, go straight. 8-tile puzzle: don't worry about physical constraints, pick a tile up and move it to its correct position; better: allow sliding over existing tiles. TSP: a minimum spanning tree is a lower bound on the tour. A heuristic should be easy to compute.
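The two 8-tile relaxations as code, assuming a state is a tuple of 9 entries in row-major order with 0 for the blank (a common but here hypothetical encoding):

```python
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def misplaced_tiles(state, goal=GOAL):
    """Relaxation: a tile may jump straight to its goal square."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan_distance(state, goal=GOAL):
    """Relaxation: tiles may slide over one another."""
    goal_pos = {tile: (i // 3, i % 3) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        r, c = i // 3, i % 3
        gr, gc = goal_pos[tile]
        total += abs(r - gr) + abs(c - gc)
    return total

state = (1, 2, 3, 4, 0, 6, 7, 5, 8)
print(misplaced_tiles(state), manhattan_distance(state))   # 2 2
```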

Iterative Deepening A* Like iterative deepening, but: replaces the depth limit with an f-cost limit, and raises the limit each pass (e.g. by the smallest operator cost, or to the smallest f-value that exceeded the old limit). Complete and optimal with an admissible h.
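A compact sketch of the "raise the limit to the smallest f that exceeded it" variant, using the same (state, step_cost) successor convention as the earlier sketches; names are illustrative:

```python
import math

def ida_star(start, goal_test, successors, h):
    """Iterative-deepening A*: depth-first passes bounded by f = g + h."""

    def search(path, g, bound):
        state = path[-1]
        f = g + h(state)
        if f > bound:
            return f, None                     # smallest f that exceeded the bound
        if goal_test(state):
            return f, list(path)
        next_bound = math.inf
        for nxt, step in successors(state):
            if nxt in path:                    # avoid trivial cycles
                continue
            path.append(nxt)
            t, found = search(path, g + step, bound)
            path.pop()
            if found is not None:
                return t, found
            next_bound = min(next_bound, t)
        return next_bound, None

    bound = h(start)
    while True:
        bound, found = search([start], 0, bound)
        if found is not None:
            return found
        if bound == math.inf:                  # no solution exists
            return None
```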

SMA* A memory-bounded version of A* due to the textbook authors. Beware authors. SKIP

Hill-climbing Goal: optimize an objective function. Does not require differentiable functions. Can be applied to "goal-predicate" problems, e.g. BSAT with the objective function being the number of clauses satisfied. Intuition: always move to a better state.

Some Hill-Climbing Algorithms Start = a random state or a special state. Until no improvement: Steepest Ascent finds the best successor, OR Greedy selects the first improving successor; go to that successor. Repeat the above process some number of times (restarts). Can be done with partial solutions or full solutions.

Hill-climbing Algorithm In Best-first, replace the storage by a single node. Works if there is a single hill; use restarts if there are multiple hills. Problems: finds a local maximum, not the global one; plateaux: large flat regions (happens in BSAT); ridges: fast up the ridge, slow along the ridge. Not complete, not optimal. No memory problems.
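A minimal sketch of steepest-ascent hill-climbing with random restarts; the function names and signatures (random_state, successors, score) are illustrative, not from any particular library:

```python
import random

def hill_climb(random_state, successors, score, restarts=10):
    """Steepest-ascent hill-climbing with random restarts (score is maximized).

    random_state() returns a fresh starting state, successors(s) returns the
    neighbouring states of s, score(s) is the objective function.
    """
    best_state, best_score = None, float("-inf")
    for _ in range(restarts):
        current = random_state()
        while True:
            neighbours = successors(current)
            if not neighbours:
                break
            candidate = max(neighbours, key=score)
            if score(candidate) <= score(current):
                break                          # local maximum (or plateau): stop this climb
            current = candidate
        if score(current) > best_score:
            best_state, best_score = current, score(current)
    return best_state, best_score

# Toy usage: climb toward x = 7 on the integers 0..20.
start = lambda: random.randint(0, 20)
neigh = lambda x: [y for y in (x - 1, x + 1) if 0 <= y <= 20]
print(hill_climb(start, neigh, lambda x: -(x - 7) ** 2))    # (7, 0)
```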

Beam Search A mix of hill-climbing and Best-first. Storage is a cache of the best K states. Solves the storage problem, but... Not optimal, not complete.
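A small sketch under the same illustrative conventions as the hill-climbing code: keep only the best K states at every step.

```python
import heapq
import itertools

def beam_search(start_states, successors, score, k=10, steps=100):
    """Local beam search: keep the best k states at each step (score is maximized)."""
    beam = heapq.nlargest(k, start_states, key=score)
    best = max(beam, key=score)
    for _ in range(steps):
        candidates = list(itertools.chain.from_iterable(successors(s) for s in beam))
        if not candidates:
            break
        beam = heapq.nlargest(k, candidates, key=score)    # the cache of best K states
        top = max(beam, key=score)
        if score(top) > score(best):
            best = top
    return best
```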

Local (Iterative) Improvement Initial state = a full candidate solution. Greedy hill-climbing: if a move goes up, take it; if flat, probabilistically decide whether to accept it; if down, don't take it. We are gradually expanding the set of allowed moves.

Local Improvement: Performance Solves the 1,000,000-queens problem quickly. Useful for scheduling. Useful for BSAT: (sometimes) solves large problems. More time, better answer. No memory problems. No guarantees of anything.

Simulated Annealing Like hill-climbing, but probabilistically allows down moves, controlled by the current temperature and by how bad the move is. Let T[1], T[2], ... be a temperature schedule; usually T[1] is high and T[k] = 0.9 * T[k-1]. Let E be a quality measure of a state. Goal: maximize E.

Simulated Annealing Algorithm Current = random state, k = 1. If T[k] = 0, stop. Next = a random successor state. If Next is better than Current, move there. If Next is worse: let Delta = E(Next) - E(Current) (so Delta < 0) and move to Next with probability e^(Delta/T[k]). Set k = k + 1 and repeat.
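A direct sketch of that loop, using a geometric schedule and stopping once T is effectively zero; random_successor and E are illustrative, caller-supplied functions:

```python
import math
import random

def simulated_annealing(start, random_successor, E, t1=10.0, decay=0.9, t_min=1e-3):
    """Maximize E. Schedule: T[1] = t1, T[k] = decay * T[k-1]; stop when T ~ 0."""
    current, T = start, t1
    best = current
    while T > t_min:                           # "if T[k] = 0, stop"
        nxt = random_successor(current)
        delta = E(nxt) - E(current)
        # Always accept up moves; accept down moves with probability e^(Delta/T).
        if delta > 0 or random.random() < math.exp(delta / T):
            current = nxt
        if E(current) > E(best):
            best = current
        T *= decay
    return best
```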

Simulated Annealing Discussion No guarantees. When T is large, e^(Delta/T) is close to e^0, i.e. 1, so for large T you go almost anywhere. When T is small, e^(Delta/T) is close to e^(-infinity), i.e. 0, so you avoid most bad moves. After T reaches 0, one often finishes with simple hill-climbing. Execution time depends on the schedule; memory use is trivial.

Genetic Algorithm Weakly analogous to "evolution". No theoretical guarantees. Applies to nearly any problem. Population = a set of individuals. Fitness function on individuals. Mutation operator: a new individual from an old one. Crossover: new individuals from two parents.

GA Algorithm (one version) Population = a random set of n individuals. Probabilistically choose n pairs of individuals to mate. Probabilistically choose n descendants for the next generation (which may or may not include the parents). The probability depends on the fitness function, much as move acceptance does in simulated annealing. How well does it work? Good question.

Scores to Probabilities Suppose the scores of the n individuals are a[1], a[2], ..., a[n]. The probability of choosing the j-th individual is prob[j] = a[j] / (a[1] + a[2] + ... + a[n]).
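A short sketch of this fitness-proportional ("roulette-wheel") selection; it assumes non-negative scores that are not all zero. (In Python, random.choices(population, weights=scores) does the same job in one call.)

```python
import random

def roulette_select(population, scores):
    """Pick one individual with probability a[j] / sum(a)."""
    total = sum(scores)
    r = random.uniform(0, total)
    running = 0.0
    for individual, s in zip(population, scores):
        running += s
        if running >= r:
            return individual
    return population[-1]                      # guard against floating-point round-off
```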

GA Example Problem: Boolean satisfiability. Individual = an assignment (bindings) for the variables. Mutation = flip one variable. Crossover = for two parents, randomly choose positions from one parent; one child takes those bindings from that parent and the remaining bindings from the other parent. Fitness = number of clauses satisfied.
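A minimal GA sketch for BSAT putting these operators together. The encoding is an assumption for illustration: a clause is a list of signed 1-based variable indices, an individual is a list of booleans, and the parameter values are arbitrary.

```python
import random

def fitness(assign, clauses):
    """Number of clauses satisfied by the assignment."""
    return sum(any(assign[abs(l) - 1] == (l > 0) for l in clause) for clause in clauses)

def mutate(assign):
    """Flip one randomly chosen variable."""
    child = assign[:]
    i = random.randrange(len(child))
    child[i] = not child[i]
    return child

def crossover(p1, p2):
    """Take a random subset of positions from p1, the rest from p2."""
    keep = [random.random() < 0.5 for _ in p1]
    return [a if k else b for a, b, k in zip(p1, p2, keep)]

def ga_sat(clauses, n_vars, pop_size=50, generations=200):
    pop = [[random.random() < 0.5 for _ in range(n_vars)] for _ in range(pop_size)]
    for _ in range(generations):
        scores = [fitness(ind, clauses) for ind in pop]
        if max(scores) == len(clauses):                     # all clauses satisfied
            return pop[scores.index(max(scores))]
        # Fitness-proportional choice of parents, then crossover and mutation.
        parents = random.choices(pop, weights=[s + 1 for s in scores], k=2 * pop_size)
        pop = [mutate(crossover(parents[2 * i], parents[2 * i + 1]))
               for i in range(pop_size)]
    scores = [fitness(ind, clauses) for ind in pop]
    return pop[scores.index(max(scores))]

# Toy usage: (x1 or not x2) and (x2 or x3).
print(ga_sat([[1, -2], [2, 3]], n_vars=3))
```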

GA Example N-queens problem. Individual: an array whose i-th entry is the column assigned to the i-th queen. Mating: crossover. Fitness (to minimize): the number of constraint violations.

GA Function Optimization Example Let f(x, y) be the function to optimize, where x and y are real numbers between 0 and 10. Say the hidden function is: f(x, y) = 2 if x > 9 and y > 9; f(x, y) = 1 if x > 9 or y > 9; f(x, y) = 0 otherwise.

GA Works Well Here Individual = a point (x, y). Mating: take something from each parent, so mate((x, y), (x', y')) yields (x, y') and (x', y). No mutation. Hill-climbing does poorly; the GA does well. This example generalizes to functions of large arity.

GA Discussion Reported to work well on some problems. Typically not compared with other approaches, e.g. hill-climbing with restarts. Opinion: Works if the “mating” operator captures good substructures. Any ideas for GA on TSP?