Princess Nora University
Artificial Intelligence
Chapter (4): Informed search algorithms

Outline
Best-first search
Greedy best-first search
A* search
Heuristics
Local search algorithms
Hill-climbing search
Simulated annealing search
Local beam search
Genetic algorithms

Informed Search
Idea: use problem-specific knowledge to pick which node to expand.
Typically involves a heuristic function h(n) estimating the cost of the cheapest path from n.STATE to a goal state.
A search strategy is defined by picking the order of node expansion.
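For concreteness: in the Romania route-finding example used on the following slides, h(n) can be a simple lookup table of straight-line distances to the goal. A minimal Python sketch, assuming the goal is Bucharest (the distances are the standard AIMA h_SLD estimates):

# Straight-line distances to Bucharest in km: the classic h_SLD estimates.
h_sld = {
    "Arad": 366, "Bucharest": 0, "Fagaras": 176, "Pitesti": 100,
    "Rimnicu Vilcea": 193, "Sibiu": 253, "Timisoara": 329, "Zerind": 374,
}

def h(state):
    """Heuristic: estimated cost from state to the goal (Bucharest)."""
    return h_sld[state]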

Best-first search
We can use a heuristic to direct our search towards the goal.
Best-first search simply chooses the unvisited node with the best heuristic value to visit next.
It can be implemented with the same algorithm as lowest-cost-first (uniform-cost) breadth-first search.
Idea: use an evaluation function f(n) for each node
– an estimate of "desirability"
– expand the most desirable unexpanded node
Implementation: order the nodes in the fringe in decreasing order of desirability (see the sketch below).
Special cases:
– greedy best-first search
– A* search
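A minimal Python sketch of this implementation idea, ordering the fringe with a priority queue on f(n) (lower f = more desirable); the function names are illustrative, not from the slides:

import heapq

def best_first_search(start, goal_test, successors, f):
    """Generic best-first search: always expand the unexpanded node with the best f."""
    fringe = [(f(start), start, [start])]          # entries: (f value, state, path)
    visited = set()
    while fringe:
        _, state, path = heapq.heappop(fringe)     # most desirable node first
        if goal_test(state):
            return path
        if state in visited:
            continue
        visited.add(state)
        for next_state, _step_cost in successors(state):
            if next_state not in visited:
                heapq.heappush(fringe, (f(next_state), next_state, path + [next_state]))
    return None                                    # no goal found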

Romania with step costs in km

Greedy best-first search
Evaluation function f(n) = h(n) (heuristic) = estimate of cost from n to the goal.
e.g., h_SLD(n) = straight-line distance from n to Bucharest.
Greedy best-first search expands the node that appears to be closest to the goal.
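Greedy best-first search is then just the generic best_first_search above with f(n) = h(n). A usage sketch on a fragment of the Romania map (road data taken from the map slide; the expected result is the classic non-optimal route):

# Fragment of the Romania road map: city -> [(neighbour, step cost in km), ...]
roads = {
    "Arad": [("Sibiu", 140), ("Timisoara", 118), ("Zerind", 75)],
    "Sibiu": [("Fagaras", 99), ("Rimnicu Vilcea", 80)],
    "Fagaras": [("Bucharest", 211)],
    "Rimnicu Vilcea": [("Pitesti", 97)],
    "Pitesti": [("Bucharest", 101)],
}

path = best_first_search(
    start="Arad",
    goal_test=lambda s: s == "Bucharest",
    successors=lambda s: roads.get(s, []),
    f=h,                                  # greedy: f(n) = h(n)
)
# -> ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']: 450 km, not the optimal 418 km route.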

Greedy best-first search example

Properties of greedy best-first search
Complete? No: it can get stuck in loops, e.g., Iasi → Neamt → Iasi → Neamt → ...
Time? O(b^m), but a good heuristic can give dramatic improvement.
Space? O(b^m): keeps all nodes in memory.
Optimal? No.

A* search
Idea: avoid expanding paths that are already expensive.
Evaluation function: f(n) = g(n) + h(n)
g(n) = cost so far to reach n
h(n) = estimated cost from n to the goal
f(n) = estimated total cost of the path through n to the goal
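A minimal sketch of the same scheme with f(n) = g(n) + h(n), tracking g(n) as the path cost accumulated so far (illustrative; reuses the roads and h defined above):

import heapq

def a_star(start, goal_test, successors, h):
    """A* search: expand the node with the lowest f(n) = g(n) + h(n)."""
    fringe = [(h(start), 0, start, [start])]       # entries: (f, g, state, path)
    best_g = {}                                    # cheapest g found so far per state
    while fringe:
        f, g, state, path = heapq.heappop(fringe)
        if goal_test(state):
            return path, g
        if best_g.get(state, float("inf")) <= g:
            continue                               # already reached more cheaply
        best_g[state] = g
        for next_state, step_cost in successors(state):
            g2 = g + step_cost
            heapq.heappush(fringe, (g2 + h(next_state), g2, next_state, path + [next_state]))
    return None

# a_star("Arad", lambda s: s == "Bucharest", lambda s: roads.get(s, []), h)
# -> (['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'], 418)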

A* search example

Admissible heuristics
A heuristic h(n) is admissible if for every node n, h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal state from n.
An admissible heuristic never overestimates the cost to reach the goal, i.e., it is optimistic.
Example: h_SLD(n) never overestimates the actual road distance.
Theorem: if h(n) is admissible, then A* using TREE-SEARCH is optimal.
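This property can be checked numerically on a small map. A sketch, assuming the roads and h defined earlier: compute the true cost h*(n) to Bucharest with Dijkstra's algorithm on the reversed graph, and confirm h never overestimates it:

import heapq

def true_costs_to(goal, roads):
    """h*(n): exact cheapest cost from each city to goal (Dijkstra on reversed edges)."""
    reverse = {}
    for city, edges in roads.items():
        for neighbour, cost in edges:
            reverse.setdefault(neighbour, []).append((city, cost))
    dist, fringe = {goal: 0}, [(0, goal)]
    while fringe:
        d, city = heapq.heappop(fringe)
        if d > dist.get(city, float("inf")):
            continue                                # stale queue entry
        for prev, cost in reverse.get(city, []):
            if d + cost < dist.get(prev, float("inf")):
                dist[prev] = d + cost
                heapq.heappush(fringe, (d + cost, prev))
    return dist

h_star = true_costs_to("Bucharest", roads)
assert all(h(city) <= h_star[city] for city in h_star)   # h_SLD never overestimates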

Properties of A*
Complete? Yes (unless there are infinitely many nodes with f ≤ f(G)).
Time? Exponential.
Space? Keeps all nodes in memory.
Optimal? Yes.

Local search algorithms
In many optimization problems, the path to the goal is irrelevant; the goal state itself is the solution.
State space = set of "complete" configurations.
Find a configuration satisfying the constraints, e.g., n-queens.
In such cases, we can use local search algorithms: keep a single "current" state and try to improve it.
Very memory efficient (only the current state is remembered).

Hill-climbing search
"A loop that continuously moves in the direction of increasing value"; it terminates when a peak is reached.
Value can be either:
– the objective function value (maximized)
– a heuristic function value (minimized)
Hill climbing does not look ahead beyond the immediate neighbors of the current state.
It can randomly choose among the set of best successors if multiple have the best value.
Characterized as "trying to find the top of Mount Everest while in a thick fog".

Example: n-queens
Put n queens on an n × n board with no two queens on the same row, column, or diagonal.

Hill-climbing search
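As a concrete illustration of the loop described above, a minimal Python sketch of hill climbing on the 8-queens problem, minimizing the number of attacking pairs; the names and the neighbour definition are illustrative assumptions:

import random

def attacking_pairs(board):
    """h: number of pairs of queens attacking each other (board[c] = row of column c's queen)."""
    n = len(board)
    return sum(1 for c1 in range(n) for c2 in range(c1 + 1, n)
               if board[c1] == board[c2] or abs(board[c1] - board[c2]) == c2 - c1)

def hill_climb(n=8):
    """Move to the best neighbour until no neighbour improves h."""
    board = [random.randrange(n) for _ in range(n)]
    while True:
        current_h = attacking_pairs(board)
        if current_h == 0:
            return board                              # a solution
        scored = []                                   # neighbours: move one queen within its column
        for c in range(n):
            old = board[c]
            for r in range(n):
                if r != old:
                    board[c] = r
                    scored.append((attacking_pairs(board), c, r))
            board[c] = old
        best_h = min(s[0] for s in scored)
        if best_h >= current_h:
            return board                              # stuck on a local minimum
        _, c, r = random.choice([s for s in scored if s[0] == best_h])
        board[c] = r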

Hill climbing and local maxima
When local maxima exist, hill climbing is suboptimal.
Local maxima: a local maximum, as opposed to a global maximum, is a peak that is lower than the highest peak in the state space.
Once on a local maximum, the algorithm will halt even though the solution may be far from satisfactory.
Simple (often effective) solution: multiple random restarts (see the sketch below).
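A sketch of the random-restart fix, wrapping the hill_climb function above (the restart budget is an illustrative choice):

def random_restart_hill_climb(n=8, max_restarts=100):
    """Repeat hill climbing from fresh random states until a solution appears."""
    for _ in range(max_restarts):
        board = hill_climb(n)
        if attacking_pairs(board) == 0:
            return board
    return None                     # no solution within the restart budget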

Hill-climbing search: 8-queens problem
(Figure: a board state that is a local minimum with h = 1.)

Simulated annealing search
Idea: escape local maxima by allowing some "bad" moves, but gradually decrease their frequency.
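A minimal sketch for a minimization problem: a move that worsens the value by ΔE is accepted with probability e^(-ΔE/T), and the temperature T decays over time. The geometric cooling schedule and parameter values are illustrative assumptions:

import math
import random

def simulated_annealing(state, energy, neighbour, t0=1.0, cooling=0.995, steps=10000):
    """Minimize energy(state), accepting worse states with probability exp(-dE / T)."""
    t, best = t0, state
    for _ in range(steps):
        candidate = neighbour(state)
        d_e = energy(candidate) - energy(state)
        if d_e < 0 or random.random() < math.exp(-d_e / t):
            state = candidate                    # accept: improving, or "bad" with prob e^(-dE/T)
            if energy(state) < energy(best):
                best = state
        t *= cooling                             # temperature decays, so bad moves get rarer
        if t < 1e-9:
            break
    return best

On 8-queens, for instance, energy could be the attacking_pairs function from the hill-climbing sketch and neighbour could move one randomly chosen queen to a random row.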

Properties of simulated annealing search
One can prove: if T decreases slowly enough, then simulated annealing will find a global optimum with probability approaching 1.
Widely used in VLSI layout, airline scheduling, etc.

Local beam search
Keep track of k states rather than just one.
Start with k randomly generated states.
At each iteration, all the successors of all k states are generated.
If any one is a goal state, stop; otherwise select the k best successors from the complete list and repeat.
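A minimal sketch of that loop; the successor, scoring, and goal-test functions are assumptions supplied by the caller (lower score = better):

def local_beam_search(initial_states, successors, score, is_goal, max_iters=1000):
    """Keep only the k best states each iteration, where k = len(initial_states)."""
    k = len(initial_states)
    states = list(initial_states)
    for _ in range(max_iters):
        pool = [s2 for s in states for s2 in successors(s)]   # all successors of all k states
        for s in pool:
            if is_goal(s):
                return s
        if not pool:
            break
        states = sorted(pool, key=score)[:k]                  # select the k best successors
    return min(states, key=score)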

Genetic algorithms
A successor state is generated by combining two parent states.
Start with k randomly generated states (the population).
A state is represented as a string over a finite alphabet (often a string of 0s and 1s).
An evaluation function (fitness function) assigns higher values to better states.
Produce the next generation of states by selection, crossover, and mutation.

Genetic algorithms
Fitness function: number of non-attacking pairs of queens (min = 0, max = 8 × 7/2 = 28).
Selection probability is proportional to fitness, e.g., 24/(24+23+20+11) = 31%, 23/(24+23+20+11) = 29%, etc.

Genetic algorithms
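As a concrete illustration of selection, crossover, and mutation, a minimal sketch of one GA generation for 8-queens, using fitness-proportional selection, single-point crossover, and random mutation. It reuses the attacking_pairs function from the hill-climbing sketch, and the mutation rate is an illustrative choice:

import random

def fitness(board):
    """Number of non-attacking pairs of queens (28 means a solution for 8 queens)."""
    return 28 - attacking_pairs(board)        # attacking_pairs: see the hill-climbing sketch

def next_generation(population, mutation_rate=0.05):
    """Produce a new population via selection, crossover, and mutation."""
    weights = [fitness(b) for b in population]
    children = []
    for _ in range(len(population)):
        p1, p2 = random.choices(population, weights=weights, k=2)   # fitness-proportional selection
        cut = random.randrange(1, len(p1))                          # single-point crossover
        child = p1[:cut] + p2[cut:]
        if random.random() < mutation_rate:                         # mutation: re-place one queen
            child[random.randrange(len(child))] = random.randrange(len(child))
        children.append(child)
    return children

A full run would iterate next_generation from a random initial population until some board reaches fitness 28.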