Local search algorithms

Local search algorithms
In many optimization problems, the state space is the space of all possible complete solutions. We have an objective function that tells us how “good” a given state is, and we want to find the solution (goal) by minimizing or maximizing the value of this function.

Example: n-queens problem
Put n queens on an n × n board with no two queens on the same row, column, or diagonal. State space: all possible n-queen configurations. What’s the objective function? The number of pairwise conflicts.
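As a concrete illustration, here is a minimal sketch of this objective function in Python, assuming a state is represented as a tuple where entry i gives the row of the queen in column i (this representation is our assumption for illustration, not specified on the slide):

def conflicts(state):
    """Count pairs of queens that attack each other.

    state[i] is the row of the queen in column i, so no two queens
    ever share a column; only rows and diagonals can conflict.
    """
    n = len(state)
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            same_row = state[i] == state[j]
            same_diag = abs(state[i] - state[j]) == j - i
            if same_row or same_diag:
                count += 1
    return count

# Example: a 4-queens state with all queens on one diagonal
print(conflicts((0, 1, 2, 3)))  # every pair conflicts -> 6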

Example: Traveling salesman problem
Find the shortest tour connecting a given set of cities. State space: all possible tours. Objective function: length of the tour.
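A minimal sketch of the tour-length objective, assuming a tour is a list of (x, y) city coordinates visited in order and closed back to the start (again, the representation is an assumption for illustration):

import math

def tour_length(tour):
    """Total length of a closed tour over (x, y) city coordinates."""
    total = 0.0
    for i in range(len(tour)):
        x1, y1 = tour[i]
        x2, y2 = tour[(i + 1) % len(tour)]  # wrap around to close the tour
        total += math.hypot(x2 - x1, y2 - y1)
    return total

print(tour_length([(0, 0), (0, 1), (1, 1), (1, 0)]))  # unit square -> 4.0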

Local search algorithms
In such problems, the start state may not be specified, and the path to the goal doesn’t matter. In these cases, we can use local search algorithms that keep a single “current” state and gradually try to improve it.

Example: n-queens problem
State space: all possible n-queen configurations. Objective function: number of pairwise conflicts. What’s a possible local improvement strategy? Move one queen within its column to reduce conflicts.
[Figure: an 8-queens board state with h = 17 pairwise conflicts.]
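A sketch of this improvement step, reusing the conflicts() function from the earlier sketch; taking the single best queen move is one reasonable reading of the strategy, not the only one:

def best_queen_move(state):
    """Return the successor obtained by moving one queen within its
    column so as to minimize the number of pairwise conflicts;
    return the state unchanged if no move improves it."""
    n = len(state)
    best = tuple(state)
    best_h = conflicts(best)
    for col in range(n):
        for row in range(n):
            if row == state[col]:
                continue  # not a move
            successor = state[:col] + (row,) + state[col + 1:]
            h = conflicts(successor)
            if h < best_h:
                best, best_h = successor, h
    return best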

Example: Traveling salesman problem
Find the shortest tour connecting n cities. State space: all possible tours. Objective function: length of the tour. What’s a possible local improvement strategy? Start with any complete tour and perform pairwise exchanges.
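The classic pairwise exchange is the 2-opt move: remove two edges of the tour and reconnect by reversing the segment between them. A minimal sketch, reusing tour_length() from above (the first-improvement policy is an illustrative choice):

def two_opt_swap(tour, i, j):
    """Reverse the segment tour[i..j], which replaces the two edges
    entering i and leaving j with a reconnected pair."""
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

def improve_once(tour):
    """Try all pairwise exchanges; return the first one that shortens
    the tour, or the original tour if none does."""
    for i in range(1, len(tour) - 1):
        for j in range(i + 1, len(tour)):
            candidate = two_opt_swap(tour, i, j)
            if tour_length(candidate) < tour_length(tour):
                return candidate
    return tour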

Hill-climbing search
Initialize current to the starting state.
Loop:
  Let next = highest-valued successor of current
  If value(next) ≤ value(current), return current
  Else let current = next
(Returning on ≤ rather than a strict < guarantees termination: with <, the search could wander forever across successors of equal value.)
Variants: choose the first better successor, or choose randomly among the better successors. “Like climbing Mount Everest in thick fog with amnesia.”
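A runnable sketch of this loop for n-queens, reusing conflicts() and best_queen_move() from the earlier sketches. Since we minimize conflicts here, “highest-valued” means lowest h:

import random

def hill_climb(n=8):
    """Steepest-descent hill climbing on n-queens; returns a local
    (possibly global) optimum and its conflict count."""
    current = tuple(random.randrange(n) for _ in range(n))
    while True:
        next_state = best_queen_move(current)
        if conflicts(next_state) >= conflicts(current):
            return current, conflicts(current)  # no better successor
        current = next_state

state, h = hill_climb()
print(state, "h =", h)  # h == 0 is a solution; h > 0 means stuck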

Hill-climbing search
Is it complete/optimal? No: it can get stuck in local optima.
[Figure: a local optimum for the 8-queens problem with h = 1; no single-queen move reduces the conflict count, yet the state is not a solution.]

The state space “landscape”
How do we escape local maxima? Random-restart hill climbing: rerun hill climbing from different random starting states and keep the best result. What about “shoulders” and “plateaux”, where neighboring states have equal value? Allowing a limited number of sideways moves lets the search drift across flat regions instead of stopping at their edge.
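A minimal random-restart wrapper around the hill_climb() sketch above (the number of restarts is an arbitrary illustrative choice):

def random_restart(restarts=25, n=8):
    """Run hill climbing from several random starts; keep the best."""
    best_state, best_h = hill_climb(n)
    for _ in range(restarts - 1):
        state, h = hill_climb(n)
        if h < best_h:
            best_state, best_h = state, h
        if best_h == 0:
            break  # found a conflict-free solution
    return best_state, best_h

print(random_restart())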

Simulated annealing search
Idea: escape local maxima by allowing some “bad” moves, but gradually decrease their frequency. The probability of taking a downhill move decreases with the number of iterations and with the steepness of the downhill move, and is controlled by an annealing schedule. Inspired by annealing of glass and metal, where slow cooling lets the material settle into a low-energy state.

Simulated annealing search
Initialize current to the starting state.
For i = 1 to ∞:
  If T(i) = 0, return current
  Let next = a random successor of current
  Let ΔE = value(next) − value(current)
  If ΔE > 0, let current = next
  Else let current = next with probability exp(ΔE / T(i))
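A runnable sketch for n-queens, again reusing conflicts(); since we minimize conflicts, we take value = −conflicts so that ΔE > 0 means improvement. The geometric cooling schedule and its constants are arbitrary illustrative choices, not part of the slide’s algorithm:

import math
import random

def simulated_annealing(n=8, t0=1.0, cooling=0.995, floor=1e-3):
    """Simulated annealing on n-queens with value = -conflicts."""
    current = tuple(random.randrange(n) for _ in range(n))
    t = t0
    while t > floor:
        col = random.randrange(n)
        row = random.randrange(n)
        nxt = current[:col] + (row,) + current[col + 1:]
        dE = conflicts(current) - conflicts(nxt)  # value(next) - value(current)
        if dE > 0 or random.random() < math.exp(dE / t):
            current = nxt  # always take uphill moves; downhill with prob exp(dE/t)
        t *= cooling  # geometric annealing schedule
    return current, conflicts(current)

print(simulated_annealing())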

Effect of temperature
[Figure: acceptance probability exp(ΔE / T) plotted against ΔE for several temperatures T.] At high T, even steeply downhill moves are accepted with probability close to 1; as T falls toward 0, the acceptance probability of any downhill move falls toward 0 and only improving moves survive.
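A quick numeric illustration of the same effect, for a move that is 1 unit downhill (the temperature values are chosen arbitrarily):

import math

for t in (10.0, 1.0, 0.1):
    # probability of accepting a move with dE = -1 at temperature t
    print(f"T = {t:5.1f}: accept prob = {math.exp(-1 / t):.4f}")
# T = 10 accepts ~0.90 of such moves; T = 0.1 accepts almost none.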

Simulated annealing search
One can prove that if the temperature decreases slowly enough, simulated annealing will find a global optimum with probability approaching one. In practice, however, this usually takes impractically long: the more downhill steps you need to escape a local optimum, the less likely you are to make all of them in a row. More modern techniques generalize this idea into the family of Markov chain Monte Carlo (MCMC) algorithms for exploring complicated state spaces.

Local beam search
Start with k randomly generated states. At each iteration, generate all the successors of all k states. If any one is a goal state, stop; else select the k best successors from the complete list and repeat.
Is this the same as running k greedy searches in parallel? No: the k threads share information. Successors of the most promising states crowd out the others, so the search concentrates its effort where progress is being made.
[Figure: diagrams contrasting k independent greedy searches with a single beam search over the pooled successor list.]
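A minimal sketch for n-queens, once more reusing conflicts() and treating lower h as better. Pooling all successors before keeping the k best is exactly the step that distinguishes this from k independent greedy searches:

import random

def local_beam_search(k=4, n=8, max_iters=1000):
    """Local beam search on n-queens: keep the k best states drawn
    from the pooled successors of all current states."""
    beam = [tuple(random.randrange(n) for _ in range(n)) for _ in range(k)]
    for _ in range(max_iters):
        pool = []
        for state in beam:
            if conflicts(state) == 0:
                return state  # goal: a conflict-free placement
            for col in range(n):
                for row in range(n):
                    if row != state[col]:
                        pool.append(state[:col] + (row,) + state[col + 1:])
        pool.sort(key=conflicts)
        beam = pool[:k]  # k best successors from the complete list
    return min(beam, key=conflicts)

print(local_beam_search())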