1 CS 2710, ISSP 2610 Chapter 4, Part 2: Heuristic Search

2 Beam Search
Cheap, unpredictable search
For problems with many solutions, it may be worthwhile to discard unpromising paths
A greedy best-first search that keeps only a fixed number of nodes on the fringe

3 Beam Search

def beamSearch(fringe, beamwidth):
    # goalp, successors, makeNodes, and sortByH are problem-specific helpers
    while len(fringe) > 0:
        cur = fringe[0]                      # take the best node off the fringe
        fringe = fringe[1:]
        if goalp(cur):
            return cur
        newnodes = makeNodes(cur, successors(cur))
        fringe = sortByH(newnodes, fringe)   # merge, sorted by heuristic value
        fringe = fringe[:beamwidth]          # keep only the best beamwidth nodes
    return []

4 Beam Search
Optimal? Complete? Hardly!
Space? O(b) (the fringe is capped at the beam width; expanding a node generates its b successors)
Often useful

5 Generating Heuristics
Exact solutions to different (relaxed) problems
H1 (# of misplaced tiles) is perfectly accurate if a tile could move to any square
H2 (sum of Manhattan distances) is perfectly accurate if a tile could move 1 square in any direction
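
To make these concrete, here is a minimal Python sketch of both heuristics; the 9-tuple state encoding and the names are illustrative assumptions, not from the slides:

# Sketch only: 8-puzzle states as 9-tuples in row-major order, 0 = blank.
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def h1(state):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for i, t in enumerate(state) if t != 0 and t != GOAL[i])

def h2(state):
    """Sum of Manhattan distances of each tile from its goal square."""
    total = 0
    for i, t in enumerate(state):
        if t == 0:
            continue
        gi = GOAL.index(t)
        total += abs(i // 3 - gi // 3) + abs(i % 3 - gi % 3)
    return total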

6 Relaxed Problems (cont)
If a problem is defined formally as a set of constraints, relaxed problems can be generated automatically
A tile can move from square A to square B if:
–A is adjacent to B, and
–B is blank
Three relaxed problems result from removing one or both constraints
Absolver (Prieditis, 1993)

7 Combining Heuristics
If you have lots of heuristics and none dominates the others and all are admissible…
Use them all!
H(n) = max(h1(n), …, hm(n))
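
A sketch of the combination, reusing h1 and h2 from the snippet above:

# Illustrative: the max of admissible heuristics is itself admissible,
# and it dominates each component heuristic.
def combined_h(heuristics):
    return lambda n: max(h(n) for h in heuristics)

H = combined_h([h1, h2])   # h1, h2 from the earlier sketch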

8 Generating Heuristics
Use the solution cost of a subproblem. E.g., get tiles 1 through 4 into the right locations, ignoring the others.
Pattern databases: store exact solution costs for every possible subproblem instance
–The heuristic function looks up the value in the DB
–The DB is constructed by searching back from the goal and recording the cost of each new pattern
Do the same for tiles 5, 6, 7, 8 and take the max
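
Below is a minimal sketch of that backward construction for tiles 1 through 4 of the 8-puzzle; the abstraction, tile set, and helper names are my own illustration, not the slides' code:

from collections import deque

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # 0 is the blank
TRACKED = {1, 2, 3, 4}               # subproblem: tiles 1-4 only

def abstract(state):
    """Replace untracked tiles with a wildcard so they are indistinguishable."""
    return tuple(t if t in TRACKED or t == 0 else -1 for t in state)

def neighbors(state):
    """All states reachable by sliding one tile into the blank (3x3 board)."""
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def build_pattern_db():
    """BFS backward from the goal pattern, recording the cost of each new pattern."""
    start = abstract(GOAL)
    db = {start: 0}
    frontier = deque([start])
    while frontier:
        cur = frontier.popleft()
        for nxt in neighbors(cur):
            if nxt not in db:
                db[nxt] = db[cur] + 1
                frontier.append(nxt)
    return db

db = build_pattern_db()

def h_pattern(state):
    # Exact cost of the abstracted subproblem, a lower bound for the full problem
    return db[abstract(state)]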

9 Other Sources of Heuristics
Ad hoc, informal rules of thumb (guesswork)
Approximate solutions to problems (algorithms course)
Learn from experience (solving lots of 8-puzzles)
–Each optimal solution yields learning examples: (node, actual cost to goal)
–Learn a heuristic function, e.g., H(n) = c1x1(n) + c2x2(n)
x1 = # misplaced tiles; x2 = # pairs of adjacent tiles that are also adjacent in the goal state
c1 and c2 are learned (best fit to the training data)
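
For instance, the best-fit coefficients can be found by least squares; the feature values and costs below are made up purely for illustration:

import numpy as np

# Rows are (x1, x2) feature vectors for nodes taken from optimal solutions;
# y holds the actual cost-to-goal for each node. All numbers are invented.
X = np.array([[7.0, 2.0], [5.0, 4.0], [3.0, 6.0], [1.0, 7.0]])
y = np.array([18.0, 14.0, 10.0, 4.0])

c, *_ = np.linalg.lstsq(X, y, rcond=None)   # best-fit coefficients c1, c2

def h_learned(x1, x2):
    return c[0] * x1 + c[1] * x2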

10 Remaining Search Types
Recall we have…
–Backtracking state-space search
–Local search and optimization
–Constraint satisfaction search

11 Local Search and Optimization
Previous searches: keep paths in memory and remember alternatives so the search can backtrack. The solution is a path to a goal.
The path may be irrelevant if only the final configuration is needed (8-queens, IC design, network optimization, …)

12 Local Search
Use a single current state and move only to neighbors
Uses little space
Can find reasonable solutions in large or infinite (continuous) state spaces for which the other algorithms are not suitable

13 Optimization
Local search is often suitable for optimization problems
Search for the best state by optimizing an objective function

14 Visualization
States are laid out in a landscape
Height corresponds to the objective function value
Move around the landscape to find the highest (or lowest) peak
Only keep track of the current states and their immediate neighbors

15 Local Search Algorithms
Two strategies for choosing the state to visit next:
–Hill climbing
–Simulated annealing
Then, an extension to multiple current states:
–Genetic algorithms

16 Hill Climbing (Greedy Local Search)
Generate nearby successor states to the current state based on some knowledge of the problem
Pick the best of the bunch and replace the current state with it
Loop
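
A minimal sketch of this loop, assuming problem-specific successors() and value() helpers:

def hill_climb(state, successors, value):
    while True:
        # best neighbor, or None if there are no successors at all
        best = max(successors(state), key=value, default=None)
        if best is None or value(best) <= value(state):
            return state        # no neighbor improves on the current state
        state = best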

17 Hill-climbing search problems
Local maximum: a peak that is lower than the highest peak, so a bad solution may be returned
Plateau: the evaluation function is flat, resulting in a random walk
Ridge: slopes very gently toward a peak, so the search may oscillate from side to side
(Figure: a state-space landscape showing a local maximum, a plateau, and a ridge)

18 Random-restart hill climbing
Start different hill-climbing searches from random starting positions, stopping when a goal is found
If all states have equal probability of being generated, it is complete with probability approaching 1 (a goal state will eventually be generated)
Works best if there are few local maxima and plateaux
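
A sketch of the restart wrapper around hill_climb() from the earlier snippet; random_state() and is_goal() are assumed helpers:

def random_restart(random_state, successors, value, is_goal, restarts=100):
    best = None
    for _ in range(restarts):
        result = hill_climb(random_state(), successors, value)
        if is_goal(result):
            return result        # stop as soon as a goal is found
        if best is None or value(result) > value(best):
            best = result        # otherwise remember the best local optimum
    return best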

19 Random-restart hill climbing
Hmm… "If all states have equal probability of being generated, it is complete with probability approaching 1 (a goal state will eventually be generated)."
Is the idea just that it is restarted enough times? Consider…
Well, hill climbing stops when no neighbors are better than the current state
It stops on a local optimum (whether or not it is a global optimum)
So, "more iterations" does not seem to be the point

20 Random-restart hill climbing
Hmm… "If all states have equal probability of being generated, it is complete with probability approaching 1 (a goal state will eventually be generated)."
Any way to see this as true?
R&N, p. 111: a complete local search algorithm always finds a goal if one exists; an optimal algorithm always finds a global minimum/maximum
So here, "goal" means a local minimum/maximum

21 Simulated Annealing
Based on a metallurgical metaphor
–Start with the temperature set very high and slowly reduce it
–Run hill climbing with the twist that you can occasionally replace the current state with a worse state, based on the current temperature and how much worse the new state is

22 Simulated Annealing
Annealing: harden metals and glass by heating them to a high temperature and then gradually cooling them
At the start, make lots of moves and then gradually slow down

23 Simulated Annealing
More formally…
–Generate a random new neighbor of the current state
–If it's better, take it
–If it's worse, take it anyway with some probability that depends on the temperature and on how much worse the new state is

24 Simulated annealing
The probability of a move decreases with the amount ΔE by which the evaluation is worsened
A second parameter T is also used to determine the probability: high T allows more bad moves; T close to zero results in few or no bad moves
The schedule input determines the value of T as a function of how many cycles have been completed

25
function Simulated-Annealing(problem, schedule) returns a solution state
  inputs: problem, a problem
          schedule, a mapping from time to "temperature"
  local variables: current, a node
                   next, a node
                   T, a "temperature" controlling the probability of downward steps

  current ← Make-Node(Initial-State[problem])
  for t ← 1 to ∞ do
    T ← schedule[t]
    if T = 0 then return current
    next ← a randomly selected successor of current
    ΔE ← Value[next] – Value[current]
    if ΔE > 0 then current ← next
    else current ← next only with probability e^(ΔE/T)
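
A rough Python transcription of the pseudocode above, assuming problem-specific value() and random_successor() helpers and a schedule given as a function of time:

import math, random

def simulated_annealing(start, random_successor, value, schedule):
    current = start
    t = 1
    while True:
        T = schedule(t)
        if T == 0:
            return current
        nxt = random_successor(current)
        delta = value(nxt) - value(current)
        # always accept an improvement; accept a worse state with prob e^(delta/T)
        if delta > 0 or random.random() < math.exp(delta / T):
            current = nxt
        t += 1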

26 Local Beam Search
Keep track of k states rather than just one, as in hill climbing
Compared with the beam search we saw earlier, this algorithm is state-based rather than node-based

27 Local Beam Search
Begins with k randomly generated states
At each step, all successors of all k states are generated
If any one is a goal, the algorithm halts
Otherwise, it selects the k best successors from the complete list and repeats
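
A sketch of these steps, again with assumed random_state(), successors(), value(), and is_goal() helpers:

def local_beam_search(k, random_state, successors, value, is_goal):
    states = [random_state() for _ in range(k)]
    while True:
        # generate all successors of all k current states
        pool = [s for st in states for s in successors(st)]
        for s in pool:
            if is_goal(s):
                return s
        # keep only the k best successors overall
        states = sorted(pool, key=value, reverse=True)[:k]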

28 Local Beam Search
Successors can become concentrated in a small part of the state space
Stochastic beam search: choose k successors at random, with the probability of choosing a given successor increasing with its value
Like natural selection: the successors (offspring) of a state (organism) populate the next generation according to its value (fitness)

29 Genetic Algorithms
A variant of stochastic beam search
Combine two parent states to generate successors (sexual versus asexual reproduction)

30
Fun GA(pop, fitness-fn):
    repeat:
        new-pop = {}
        for i from 1 to size(pop):
            x = rand-sel(pop, fitness-fn)
            y = rand-sel(pop, fitness-fn)
            child = reproduce(x, y)
            if (small rand prob):
                child = mutate(child)
            add child to new-pop
        pop = new-pop
    until an indiv is fit enough, or out of time
    return best indiv in pop, according to fitness-fn

31
Fun reproduce(x, y):
    n = len(x)
    c = random num from 1 to n
    return append(substr(x, 1, c), substr(y, c+1, n))

32 Example: n-queens
Put n queens on an n × n board with no two queens on the same row, column, or diagonal
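
One illustrative GA encoding for n-queens (my own sketch, not from the slides): gene i gives the row of the queen in column i, and fitness counts non-attacking pairs, so the maximum n*(n-1)/2 is reached exactly at a solution:

import random

def fitness(ind):
    n = len(ind)
    good = 0
    for i in range(n):
        for j in range(i + 1, n):
            # columns differ by construction; check rows and diagonals
            if ind[i] != ind[j] and abs(ind[i] - ind[j]) != j - i:
                good += 1
    return good

def random_individual(n):
    return [random.randrange(n) for _ in range(n)]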

33 Genetic Algorithms Notes
Representation of individuals
–Classic approach: an individual is a string over a finite alphabet, with each element in the string called a gene
–Usually binary instead of AGTC as in real DNA
Selection strategy
–Random
–Selection probability proportional to fitness
–Selection is done with replacement, so a very fit individual can reproduce several times
Reproduction
–Random pairing of selected individuals
–Random selection of cross-over points
–Each gene can be altered by a random mutation

34 Genetic Algorithms
When to use them?
Genetic algorithms are easy to apply
Results can be good on some problems and bad on others
They are often worth a try before spending time on something more complicated