

Local Search Algorithms CMPT 463

When: Tuesday, April 5, 3:30 PM
Where: RLC 105
Team based: one, two, or three people per team
Languages: Python, C++, and Java
IDEs: Python IDLE, Visual Studio, Eclipse, NetBeans

Event Schedule:
3:30 – 5:30 pm – Contest
5:30 pm – Award Ceremony
5:30 pm – Pizza Party

Register your team online at or in RLC. Contact Dr. Tina Tian for questions.

Outline
Introduction to local search
Hill-climbing search
Simulated annealing
Local beam search
Genetic algorithms

Local search algorithms
In many optimization problems, the path to the goal is irrelevant; the goal state itself is the solution.
o e.g., n-queens
o Integrated-circuit design
o Job scheduling
o Telecommunication network optimization
o …

8-Queens Problem
Put 8 queens on an 8 × 8 board with no two queens attacking each other: no two queens share the same row, column, or diagonal.

8-Queens Problem
Incremental formulation: start with an empty board and add one queen at a time.
Complete-state formulation: start with all 8 queens on the board and move them around.

Local search algorithms
In many optimization problems, the path to the goal is irrelevant; the goal state itself is the solution.
o e.g., n-queens
We can use local search algorithms: keep a single "current" state and try to improve it.
o generally move to neighbors
o the paths are not retained

Advantages of local search
o Use very little memory
o Can often find reasonable solutions in large state spaces
o Useful for solving pure optimization problems: maximize a goodness measure
o Many problems do not fit the "standard" search model: Darwinian evolution (goal test? path cost?)
Note: local search algorithms can't backtrack.

Example: n-queens
Move a queen to reduce the number of conflicts.

Hill-climbing search (steepest-ascent version)
A simple loop that continually moves in the direction of increasing value, i.e., uphill.
Terminates when it reaches a "peak".
Does not look ahead beyond the immediate neighbors; does not maintain a search tree.
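As an illustration, the loop above can be sketched in Python. The arguments `neighbors` and `value` are placeholders for a problem-specific successor generator and objective function, not part of the slides:

```python
def hill_climb(state, neighbors, value):
    """Steepest-ascent hill climbing: repeatedly jump to the best
    neighbor, stopping at a peak (no neighbor is strictly better)."""
    while True:
        succ = neighbors(state)
        if not succ:
            return state
        best = max(succ, key=value)
        if value(best) <= value(state):
            return state  # peak reached: no uphill move exists
        state = best

# Toy example: maximize -x^2 over the integers, moving +/-1 at a time.
print(hill_climb(7, lambda x: [x - 1, x + 1], lambda x: -x * x))  # -> 0
```

Note that the loop keeps no memory of where it has been, which is exactly why it can get stuck on the local maxima and plateaux discussed below.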

8-queens problem
How many successors can we derive from one state? Each state has 8 × 7 = 56 successors: move any of the 8 queens to any of the 7 other squares in its column.
Complete-state formulation vs. incremental formulation.
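A sketch of the successor generator, assuming the usual complete-state representation in which `state[i]` is the row of the queen in column `i` (this representation is an assumption, not stated on the slide):

```python
def successors(state):
    """All states reachable by moving one queen to a different row in
    its own column: 8 queens x 7 alternative rows = 56 successors."""
    result = []
    for col in range(len(state)):
        for row in range(len(state)):
            if row != state[col]:
                s = list(state)
                s[col] = row  # move the queen in this column
                result.append(tuple(s))
    return result

print(len(successors((0, 4, 7, 5, 2, 6, 1, 3))))  # -> 56
```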

8-queens problem
h = number of pairs of queens that are attacking each other, either directly or indirectly (a solution has h = 0).
h = 17 for the state shown on the slide.

Hill-climbing search
"Greedy local search"
o grabs a good neighbor state without thinking ahead about where to go next
o makes rapid progress

Hill-climbing search: 8-queens problem
Only 5 steps from h = 17 to h = 1.

What we think hill climbing looks like vs. what hill climbing is usually like.

Hill-climbing search
Problem: depending on the initial state, it can get stuck in a local maximum.

Problems for hill climbing
A local maximum with h = 1.

Problems for hill climbing
Plateaux: a flat area of the state-space landscape.

Hill climbing search  Starting from a randomly generated 8- queen state, steepest-ascent hill climbing gets stuck 86% of the time. It takes 4 steps on average when it succeeds and 3 when it gets stuck. The steepest ascent version halts if the best successor has the same value as the current. 19

Some solutions
Allow a sideways move, in the hope that the flat area is really a shoulder.
o shoulder: a plateau from which uphill progress is still possible
o a flat local maximum is a plateau that is not a shoulder

Some solutions
Solution: put a limit on the number of consecutive sideways moves.
o e.g., at most 100 consecutive sideways moves in the 8-queens problem
o success rate rises from 14% to 94%
o cost: 21 steps on average for each successful instance, 64 for each failure

Some more solutions (variants of hill climbing)
Stochastic hill climbing
o chooses at random from among the uphill moves
o usually converges more slowly, but sometimes finds better solutions
First-choice hill climbing
o generates successors randomly until one is better than the current state
o a good strategy when a state has many (e.g., thousands of) successors

Some more solutions (variants of hill climbing)
Random-restart hill climbing
o "If at first you don't succeed, try, try again."
o keep restarting from randomly generated initial states, stopping when a goal is found
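The restart wrapper is a few lines; as a minimal sketch, the arguments `random_state`, `improve`, and `is_goal` are assumed problem-specific callables (e.g. a random 8-queens generator, a hill-climbing routine, and an h = 0 check):

```python
def random_restart(random_state, improve, is_goal, max_restarts=100):
    """Random-restart hill climbing: climb from fresh random starting
    states until a goal is found (complete with probability -> 1 as
    restarts increase)."""
    for _ in range(max_restarts):
        state = improve(random_state())
        if is_goal(state):
            return state
    return None  # gave up after max_restarts tries

# Toy example: this "climber" only reaches the goal 0 from starts near
# 0, so restarts are needed (the start list stands in for randomness).
starts = iter([5, -4, 1])
result = random_restart(
    lambda: next(starts),
    lambda x: 0 if abs(x) <= 2 else x,  # gets "stuck" far from 0
    lambda x: x == 0,
)
print(result)  # -> 0 (found on the third restart)
```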

Simulated Annealing A hill-climbing algorithm that never makes “downhill” moves is guaranteed to be incomplete. Idea: escape local maxima by allowing some “bad” moves

Simulated Annealing
Picks a random move (instead of the best).
o If the move improves the situation, it is always accepted; otherwise it is accepted with some probability.
o The probability decreases exponentially with the "badness" of the move.
o It also decreases as the temperature T goes down.
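A sketch of the acceptance rule, using the standard exp(delta / T) probability; the cooling schedule and toy problem below are illustrative assumptions, not from the slides:

```python
import math
import random

def simulated_annealing(state, neighbor, value, schedule):
    """Accept every uphill move; accept a downhill move with
    probability exp(delta / T), which shrinks both as the move gets
    worse and as the temperature T drops. Stops when T reaches 0."""
    t = 0
    while True:
        T = schedule(t)
        if T <= 0:
            return state
        nxt = neighbor(state)
        delta = value(nxt) - value(state)  # > 0 means an uphill move
        if delta > 0 or random.random() < math.exp(delta / T):
            state = nxt
        t += 1

# Toy example: maximize -x^2 with a linear cooling schedule.
random.seed(0)
result = simulated_annealing(
    8,
    lambda x: x + random.choice([-1, 1]),
    lambda x: -x * x,
    lambda t: max(0.0, 1.0 - t / 1000),
)
```

Early on (high T) the walk accepts many bad moves and explores; as T cools it behaves more and more like hill climbing and settles near the optimum.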

Simulated Annealing Simulated annealing was first used extensively to solve VLSI (Very-Large-Scale Integration) layout problems. It has been applied widely to factory scheduling and other large-scale optimization tasks.

Local Beam Search
Idea: keep k states instead of 1; choose the top k of all their successors.
Not the same as k searches run in parallel! Searches that find good states recruit other searches to join them.
o moves resources to where the most progress is being made

Local Beam Search
Problem: quite often, all k states end up on the same local hill (concentrated in a small region of the state space).
Idea: choose the k successors randomly, biased toward good ones (stochastic beam search).
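A sketch of the basic (non-stochastic) version, with one small assumed twist: the current beam is kept in the candidate pool so the best state found so far is never lost:

```python
import heapq

def local_beam_search(starts, neighbors, value, steps=50):
    """Keep the k best states; each step, pool the current states with
    all of their successors and keep the k best of the pool."""
    k = len(starts)
    beam = list(starts)
    for _ in range(steps):
        pool = beam + [s for state in beam for s in neighbors(state)]
        beam = heapq.nlargest(k, pool, key=value)  # top k of the pool
    return max(beam, key=value)

# Toy example: maximize -x^2 starting from three states at once.
print(local_beam_search([9, -7, 4], lambda x: [x - 1, x + 1],
                        lambda x: -x * x))  # -> 0
```

Because all k states share one pool, a single promising start (here, 4) quickly pulls the whole beam into its neighborhood, which is both the strength and the "all on the same hill" weakness noted above.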

Genetic Algorithms (GA) A successor state is generated by combining two parent states

Genetic Algorithms
Start with k randomly generated states (the population).
Evaluation function (fitness function): higher values for better states.
Produce the next generation of states by selection, crossover, and mutation.

Genetic algorithms
Fitness function: number of non-attacking pairs of queens (min = 0, max = 8 × 7/2 = 28).
With parent fitnesses of 24, 23, 20, and 11, the selection probabilities are 24/(24+23+20+11) = 31%, 23/(24+23+20+11) = 29%, etc.
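The selection/crossover/mutation cycle might be sketched as follows for 8-queens. The representation (a tuple of 8 rows, one per column), population size, and mutation rate are assumptions for illustration; selection is fitness-proportionate, as in the percentages above:

```python
import random

def fitness(state):
    """Number of NON-attacking pairs of queens (28 means solved)."""
    attacks = 0
    for i in range(len(state)):
        for j in range(i + 1, len(state)):
            if state[i] == state[j] or abs(state[i] - state[j]) == j - i:
                attacks += 1
    return len(state) * (len(state) - 1) // 2 - attacks

def reproduce(a, b):
    """Crossover: splice a random prefix of one parent onto the
    matching suffix of the other."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(state, rate=0.1):
    """With small probability, move one random queen to a random row."""
    if random.random() < rate:
        s = list(state)
        s[random.randrange(len(s))] = random.randrange(len(s))
        return tuple(s)
    return state

def genetic_algorithm(k=20, generations=500):
    population = [tuple(random.randrange(8) for _ in range(8))
                  for _ in range(k)]
    for _ in range(generations):
        best = max(population, key=fitness)
        if fitness(best) == 28:
            return best  # solved
        # Fitness-proportionate selection (+1 avoids an all-zero
        # weight vector in a degenerate population).
        weights = [fitness(s) + 1 for s in population]
        population = [
            mutate(reproduce(*random.choices(population,
                                             weights=weights, k=2)))
            for _ in range(k)
        ]
    return max(population, key=fitness)
```

With small populations the run may return a near-solution rather than a solution; raising k or the generation budget helps, at the cost of more fitness evaluations.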

Crossover can produce a state that is a long way from either parent state.