Local Search Algorithms

Local Search Algorithms CPS 4801

Outline
- Hill-climbing search
- Simulated annealing
- Local beam search (briefly)

Local search algorithms
- In many optimization problems, the path to the goal is irrelevant; the goal state itself is the solution. The task is to find a final configuration satisfying the constraints, e.g., n-queens.
- In such cases, we can use local search algorithms: keep a single "current" state and try to improve it, generally by moving to a neighboring state.
- The paths followed are not retained.

Local search algorithms
- Use very little memory.
- Are useful for solving pure optimization problems.
- Can often find reasonable solutions in large state spaces.

Example: n-queens
Put n queens on an n × n board with no two queens on the same row, column, or diagonal.

Hill-climbing search (steepest-ascent version)
- A simple loop that continually moves in the direction of increasing value, i.e., uphill.
- Terminates when it reaches a "peak" where no neighbor has a higher value.
- Does not look ahead beyond the immediate neighbors and does not maintain a search tree.
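The slide describes the loop in words; the following is a minimal Python sketch (not taken from the slides) of steepest-ascent hill climbing, assuming the caller supplies a neighbors(state) generator and a value(state) objective to maximize.

```python
def hill_climbing(initial, neighbors, value):
    """Steepest-ascent hill climbing: repeatedly move to the best neighbor;
    stop when no neighbor is better than the current state."""
    current = initial
    while True:
        best = max(neighbors(current), key=value, default=None)
        if best is None or value(best) <= value(current):
            return current          # reached a peak (possibly only a local maximum)
        current = best
```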

8-queens problem
- Complete-state formulation vs. incremental formulation: here every state already places all 8 queens, one per column, and a successor moves a single queen to another square in its column.
- Each state therefore has 8 × 7 = 56 successors.

8-queens problem
- h = number of pairs of queens that are attacking each other, either directly or indirectly (h = 0 for a solution).
- h = 17 for the board shown on the slide.
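A hedged Python sketch (not from the slides) of this heuristic and of the 56-successor neighborhood, assuming the complete-state representation where state is a tuple and state[c] is the row of the queen in column c. Since hill climbing here minimizes h, the maximizing loop above can be run with value = lambda s: -attacking_pairs(s).

```python
from itertools import combinations

def attacking_pairs(state):
    """h for n-queens: the number of pairs of queens attacking each other.
    state is a tuple where state[c] is the row of the queen in column c."""
    h = 0
    for (c1, r1), (c2, r2) in combinations(enumerate(state), 2):
        if r1 == r2 or abs(r1 - r2) == abs(c1 - c2):   # same row or same diagonal
            h += 1
    return h

def queen_neighbors(state):
    """All states reachable by moving one queen within its own column (8 x 7 = 56 for n = 8)."""
    for col, row in enumerate(state):
        for new_row in range(len(state)):
            if new_row != row:
                yield state[:col] + (new_row,) + state[col + 1:]
```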

Hill-climbing search
- Sometimes called "greedy local search": it grabs a good neighbor state without thinking ahead about where to go next.
- Nevertheless, it often makes rapid progress.

Hill-climbing search: 8-queens problem
Five steps from the state on the previous slide reach a local minimum with h = 1 (no single move decreases h any further).

Hill-climbing search
Problem: depending on the initial state, hill climbing can get stuck in local maxima.

Hill-climbing search
- Starting from a randomly generated 8-queens state, steepest-ascent hill climbing gets stuck 86% of the time (it solves only 14% of instances).
- It takes 4 steps on average when it succeeds and 3 when it gets stuck.
- As described so far, the steepest-ascent version halts if the best successor has the same value as the current state.

Hill-climbing search
- Allowing a sideways move lets the search traverse a shoulder (a flat region from which uphill progress is still possible), but it will wander forever on a flat local maximum that is not a shoulder.
- Solution: a limit on the number of consecutive sideways moves, e.g., 100 consecutive sideways moves in the 8-queens problem.
- Success rate rises from 14% to 94%.
- Cost: 21 steps on average for each successful instance, 64 for each failure.
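One possible way to add the sideways-move limit to the earlier hill_climbing sketch; the parameter name max_sideways and its default of 100 are illustrative assumptions.

```python
def hill_climbing_sideways(initial, neighbors, value, max_sideways=100):
    """Hill climbing that tolerates up to max_sideways consecutive equal-value moves."""
    current, sideways = initial, 0
    while True:
        best = max(neighbors(current), key=value, default=None)
        if best is None or value(best) < value(current):
            return current                       # strictly worse everywhere: stop
        if value(best) == value(current):
            sideways += 1
            if sideways > max_sideways:
                return current                   # give up on this plateau
        else:
            sideways = 0
        current = best
```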

Variants of hill climbing
- Stochastic hill climbing: chooses at random from among the uphill moves; it usually converges more slowly, but in some landscapes it finds better solutions.
- First-choice hill climbing: generates successors randomly until one is better than the current state; good when a state has many (e.g., thousands of) successors.
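A minimal sketch of a single first-choice step, assuming the caller supplies a random_successor(state) helper; max_tries is an assumed cutoff for deciding that no better successor is likely to be found.

```python
def first_choice_step(current, random_successor, value, max_tries=1000):
    """First-choice hill climbing step: sample successors at random and take
    the first one that improves on the current state."""
    for _ in range(max_tries):
        candidate = random_successor(current)
        if value(candidate) > value(current):
            return candidate
    return current   # treated as a local maximum after max_tries samples
```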

Variants of hill climbing
- Random-restart hill climbing: "If at first you don't succeed, try, try again." It conducts a series of hill-climbing searches from randomly generated initial states until a goal is found.
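A sketch of random restarts wrapped around the hill_climbing function from the earlier sketch; random_state, is_goal, and max_restarts are assumptions used for illustration.

```python
def random_restart(random_state, neighbors, value, is_goal, max_restarts=1000):
    """Random-restart hill climbing: rerun hill climbing from fresh random
    initial states until a goal is found (or the restart budget runs out)."""
    for _ in range(max_restarts):
        result = hill_climbing(random_state(), neighbors, value)
        if is_goal(result):
            return result
    return None
```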

Simulated Annealing
- A hill-climbing algorithm that never makes "downhill" moves is guaranteed to be incomplete.
- Idea: escape local maxima by allowing some "bad" moves.

Simulated Annealing
- Picks a random move (instead of the best one).
- If the move improves the state, it is always accepted; otherwise it is accepted only with some probability.
- That probability decreases exponentially with the "badness" of the move, and also decreases as the "temperature" is lowered over time.

Simulated Annealing
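In this transcript the slide carries only its title, presumably standing in for the standard simulated-annealing pseudocode. As a stand-in, here is a minimal Python sketch assuming a value function to maximize, a random_neighbor helper, and a simple geometric cooling schedule; the schedule and the constants t0, cooling, and t_min are assumptions, not from the slides.

```python
import math
import random

def simulated_annealing(initial, random_neighbor, value,
                        t0=1.0, cooling=0.995, t_min=1e-3):
    """Simulated annealing: always accept uphill moves, accept downhill moves
    with probability exp(delta / T); T is lowered gradually (geometric schedule)."""
    current, t = initial, t0
    while t > t_min:
        candidate = random_neighbor(current)
        delta = value(candidate) - value(current)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = candidate
        t *= cooling
    return current
```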

Simulated Annealing
Simulated annealing was first used extensively to solve VLSI (Very-Large-Scale Integration) layout problems. It has since been applied widely to factory scheduling and other large-scale optimization tasks.

Local Beam Search
- Idea: keep k states instead of 1; at each step, generate all their successors and keep the top k.
- Not the same as k searches run in parallel! Searches that find good states recruit the other searches to join them, moving resources to where the most progress is being made.
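A minimal sketch of local beam search along these lines; is_goal and max_steps are assumptions added so the loop terminates.

```python
import heapq

def local_beam_search(initial_states, neighbors, value, is_goal, max_steps=1000):
    """Local beam search: from k current states, pool all successors and keep the k best."""
    beam = list(initial_states)
    k = len(beam)
    for _ in range(max_steps):
        pool = [s for state in beam for s in neighbors(state)]
        if not pool:
            break
        for s in pool:
            if is_goal(s):
                return s
        beam = heapq.nlargest(k, pool, key=value)
    return max(beam, key=value)
```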

Local Beam Search
- Problem: quite often, all k states end up concentrated on the same local hill (a small region of the state space).
- Idea: instead of keeping the k best, choose k successors at random, with better successors more likely to be chosen (stochastic beam search).
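A sketch of the stochastic selection step only, assuming values can be mapped to non-negative sampling weights; the particular weighting used here is an illustrative assumption, not the slides' definition.

```python
import random

def stochastic_beam_step(beam, neighbors, value, k):
    """One step of stochastic beam search: sample k successors with probability
    proportional to a non-negative transform of their value, instead of the k best."""
    pool = [s for state in beam for s in neighbors(state)]
    weights = [max(value(s), 0.0) + 1e-9 for s in pool]   # assumes values are (shiftable to) non-negative
    return random.choices(pool, weights=weights, k=k)
```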