Introduction to Artificial Intelligence Local Search (updated 4/30/2006) Henry Kautz

Local Search in Continuous Spaces
[Figure: the gradient of f; take a negative step along the gradient to minimize f, a positive step to maximize f]
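A minimal Python sketch of one gradient step; the central-difference gradient estimate and the example function are illustrative assumptions, not from the slides:

    def gradient(f, x, eps=1e-6):
        # Central-difference estimate of df/dx at x.
        return (f(x + eps) - f(x - eps)) / (2 * eps)

    def gradient_step(f, x, step_size=0.1, minimize=True):
        # Negative step along the gradient to minimize f,
        # positive step to maximize f.
        g = gradient(f, x)
        return x - step_size * g if minimize else x + step_size * g

    # Example: minimize f(x) = (x - 3)^2 starting from x = 0.
    x = 0.0
    for _ in range(100):
        x = gradient_step(lambda v: (v - 3) ** 2, x)
    # x is now close to the minimizer 3.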

Local Search in Discrete State Spaces

    state = choose_start_state();
    while ! GoalTest(state) do
        state := arg min { h(s) | s in Neighbors(state) }
    end
    return state;

Terminology:
–"neighbors" instead of "children"
–the heuristic h(s) is the "objective function"; it need not be admissible
No guarantee of finding a solution
–sometimes a probabilistic guarantee
Best for goal-finding, not path-finding
Many variations
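The loop above translates almost line-for-line into Python; goal_test, h, and neighbors are problem-specific callables whose signatures are my assumption, not fixed by the slides:

    def greedy_local_search(choose_start_state, goal_test, h, neighbors):
        # Repeatedly move to the lowest-h neighbor. As noted above,
        # there is no guarantee of success: the loop can sit in a
        # local minimum or wander on a plateau forever.
        state = choose_start_state()
        while not goal_test(state):
            state = min(neighbors(state), key=h)
        return state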

Local Search versus Systematic Search
Systematic Search
–BFS, DFS, IDS, Best-First, A*
–Keeps some history of visited nodes
–Always complete for finite search spaces; some versions complete for infinite spaces
–Good for building up solutions incrementally
    State = partial solution
    Action = extend the partial solution

Local Search versus Systematic Search
Local Search
–Gradient descent, greedy local search, simulated annealing, genetic algorithms
–Does not keep a history of visited nodes
–Not complete; at best one may argue it terminates with "high probability"
–Good for "fixing up" candidate solutions
    State = complete candidate solution that may not satisfy all constraints
    Action = make a small change in the candidate solution

N-Queens Problem

N-Queens Systematic Search

    state = choose_start_state();
    add state to Fringe;
    while ! GoalTest(state) do
        choose state from Fringe according to h(state);
        Fringe = Fringe U { Children(state) }
    end
    return state;

start = empty board
GoalTest = N queens are on the board
h = (N - number of queens on the board)
children = all ways of adding one queen without creating any attacks
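For contrast with the local-search versions below, a hedged Python sketch of this systematic scheme using a priority-queue fringe; the tie-breaking counter is an implementation detail I have added so heapq never compares states directly:

    import heapq
    import itertools

    def systematic_search(start, goal_test, h, children):
        # Keep a fringe of partial solutions, always expanding the
        # state with the lowest h.
        counter = itertools.count()
        fringe = [(h(start), next(counter), start)]
        while fringe:
            _, _, state = heapq.heappop(fringe)
            if goal_test(state):
                return state
            for child in children(state):
                heapq.heappush(fringe, (h(child), next(counter), child))
        return None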

N-Queens Local Search, V1

    state = choose_start_state();
    while ! GoalTest(state) do
        state := arg min { h(s) | s in Neighbors(state) }
    end
    return state;

start = put down N queens randomly
GoalTest = board has no attacking pairs
h = number of attacking pairs
neighbors = move one queen to a different square on the board
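A sketch of V1 for concreteness. I represent a board as a tuple queens where queens[i] is the column of the queen in row i; keeping one queen per row is my simplification that shrinks the neighborhood (the slide allows moving a queen to any square):

    import random

    def attacking_pairs(queens):
        # h: number of queen pairs sharing a column or diagonal.
        n = len(queens)
        return sum(1 for i in range(n) for j in range(i + 1, n)
                   if queens[i] == queens[j]
                   or abs(queens[i] - queens[j]) == j - i)

    def neighbors_v1(queens):
        # Move one queen to a different column within its row.
        n = len(queens)
        return [queens[:r] + (c,) + queens[r + 1:]
                for r in range(n) for c in range(n) if c != queens[r]]

    def n_queens_v1(n):
        state = tuple(random.randrange(n) for _ in range(n))  # random start
        while attacking_pairs(state) > 0:   # GoalTest: no attacking pairs
            # May loop forever at a local minimum; see Restarts below.
            state = min(neighbors_v1(state), key=attacking_pairs)
        return state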

N-Queens Local Search, V2

    state = choose_start_state();
    while ! GoalTest(state) do
        state := arg min { h(s) | s in Neighbors(state) }
    end
    return state;

start = put a queen on each square with 50% probability
GoalTest = board has N queens and no attacking pairs
h = (number of attacking pairs + max(0, N - # queens))
neighbors = add or delete one queen
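V2's objective and neighborhood in the same style; representing a board as a frozenset of (row, col) squares is my choice, not the slide's:

    def h_v2(board, n):
        # attacking pairs + max(0, N - number of queens on the board)
        pairs = sum(1 for a in board for b in board if a < b
                    and (a[0] == b[0] or a[1] == b[1]
                         or abs(a[0] - b[0]) == abs(a[1] - b[1])))
        return pairs + max(0, n - len(board))

    def neighbors_v2(board, n):
        # Add or delete one queen.
        deletions = [board - {sq} for sq in board]
        additions = [board | {(r, c)} for r in range(n) for c in range(n)
                     if (r, c) not in board]
        return deletions + additions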

N-Queens Demo

States Where Greedy Search Must Succeed
[Figure: an objective-function landscape in which greedy descent reaches the goal from every start state]

States Where Greedy Search Might Succeed
[Figure: an objective-function landscape in which greedy descent reaches the goal only from some start states]

Local Search Landscape
[Figure: an objective-function landscape showing a local minimum and a plateau]

Variations of Greedy Search
Where to start?
–RANDOM STATE
–PRETTY GOOD STATE
What to do when a local minimum is reached?
–STOP
–KEEP GOING
Which neighbor to move to?
–BEST neighbor
–Any BETTER neighbor (Hill Climbing)
How to make local search more robust?

Restarts

    for run = 1 to max_runs do
        state = choose_start_state();
        flip = 0;
        while ! GoalTest(state) && flip++ < max_flips do
            state := arg min { h(s) | s in Neighbors(state) }
        end
        if GoalTest(state) return state;
    end
    return FAIL
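The restart wrapper in Python; the max_runs and max_flips defaults are arbitrary placeholders:

    def local_search_with_restarts(choose_start_state, goal_test, h,
                                   neighbors, max_runs=100, max_flips=1000):
        for _ in range(max_runs):
            state = choose_start_state()
            flips = 0
            while not goal_test(state) and flips < max_flips:
                state = min(neighbors(state), key=h)
                flips += 1
            if goal_test(state):
                return state
        return None   # FAIL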

Uphill Moves: Random Noise

    state = choose_start_state();
    while ! GoalTest(state) do
        with probability noise do
            state = random member Neighbors(state)
        else
            state := arg min { h(s) | s in Neighbors(state) }
    end
    return state;
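The same loop in Python; noise is the probability of taking a random-walk step, and the 0.1 default is just a placeholder:

    import random

    def noisy_greedy_search(choose_start_state, goal_test, h,
                            neighbors, noise=0.1):
        state = choose_start_state()
        while not goal_test(state):
            if random.random() < noise:
                state = random.choice(neighbors(state))   # may move uphill
            else:
                state = min(neighbors(state), key=h)      # greedy step
        return state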

Uphill Moves: Simulated Annealing (Constant Temperature)

    state = start;
    while ! GoalTest(state) do
        next = random member Neighbors(state);
        deltaE = h(next) - h(state);
        if deltaE <= 0 then
            state := next;
        else
            with probability e^(-deltaE/temperature) do
                state := next;
        endif
    end
    return state;

(The book reverses the sign of deltaE because it searches for a maximum-h state.)

Uphill Moves: Simulated Annealing (Geometric Cooling Schedule)

    temperature := start_temperature;
    state = choose_start_state();
    while ! GoalTest(state) do
        next = random member Neighbors(state);
        deltaE = h(next) - h(state);
        if deltaE <= 0 then
            state := next;
        else
            with probability e^(-deltaE/temperature) do
                state := next;
        endif
        temperature := cooling_rate * temperature;
    end
    return state;
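A Python sketch covering both annealing variants; setting cooling_rate=1.0 gives the constant-temperature version. The parameter defaults and the temperature floor (which guards against division by zero) are my additions:

    import math
    import random

    def simulated_annealing(choose_start_state, goal_test, h, neighbors,
                            start_temperature=10.0, cooling_rate=0.999):
        state = choose_start_state()
        temperature = start_temperature
        while not goal_test(state):
            nxt = random.choice(neighbors(state))
            delta_e = h(nxt) - h(state)
            # Accept all downhill or sideways moves; accept an uphill
            # move with probability e^(-deltaE / temperature).
            if delta_e <= 0 or random.random() < math.exp(-delta_e / temperature):
                state = nxt
            temperature = max(temperature * cooling_rate, 1e-12)  # floor avoids /0
        return state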

Simulated Annealing
For any finite problem with a fully connected state space, simulated annealing provably converges to the optimum as the length of the cooling schedule increases.
But: the formal bound requires exponential search time.
In many practical applications, problems can be solved with a faster schedule that carries no guarantee.

Other Local Search Strategies
Tabu Search
–Keep a history of the last K visited states
–Revisiting a state on the history list is "tabu"
Genetic algorithms
–Population = a set of K search points
–Neighborhood = population U mutations U crossovers
    Mutation = a random change in a state
    Crossover = a random mix of assignments from two states
    Typically only a portion of the neighborhood is generated
–Search step: new population = the K best members of the neighborhood
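A hedged sketch of tabu search in the same style as the earlier loops; K and the step cap are illustrative defaults, and states are assumed comparable by equality so the membership test works:

    from collections import deque

    def tabu_search(choose_start_state, goal_test, h, neighbors,
                    k=50, max_steps=10000):
        state = choose_start_state()
        tabu = deque([state], maxlen=k)   # history of the last K visited states
        for _ in range(max_steps):
            if goal_test(state):
                return state
            # Moving back to a state on the history list is "tabu".
            allowed = [s for s in neighbors(state) if s not in tabu]
            if not allowed:
                return None   # every neighbor is tabu
            state = min(allowed, key=h)
            tabu.append(state)
        return None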