More on HW 2 (due Jan 26) Again, it must be in Python 2.7. For the A* algorithm, you will need an Open* list and a Closed list. States should have:
- the coordinates of the point
- the g-value: cost of the path from init to here
- the h-value: estimate of cost to goal
- the parent state
- (optional) a list of successors
*Note: Python will have a fit if you call it Open.
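
As a sketch of that bookkeeping (class and field names here are illustrative, not required by the assignment), a state might be represented like this:

    class State(object):
        # One A* search state; field names are illustrative only.
        def __init__(self, x, y, g, h, parent=None):
            self.coords = (x, y)     # coordinates of the point
            self.g = g               # cost of the path from init to here
            self.h = h               # estimated cost from here to the goal
            self.parent = parent     # predecessor state, for path reconstruction
            self.successors = []     # optional list of successors

        def f(self):
            return self.g + self.h   # A* evaluation function f = g + h

    open_list = []                   # not "Open", per the note above
    closed_list = []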

Input format Example (start point, then goal point, then how many rectangles, then one rectangle per line; rectangle coordinates are given clockwise):
0 0
9 6
2
0 0 4 0 4 4 0 4
7 4 9 6 4 10 2 8
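
The line grouping above is reconstructed from the slide, so treat it as an assumption; a reader for that format (whitespace-separated integers, four clockwise (x, y) corners per rectangle) might look like:

    def read_input(f):
        # Assumes the reconstructed format: start, goal, count, then rectangles.
        nums = [int(tok) for tok in f.read().split()]
        it = iter(nums)
        start = (next(it), next(it))
        goal = (next(it), next(it))
        rects = []
        for _ in range(next(it)):
            rects.append([(next(it), next(it)) for _ in range(4)])
        return start, goal, rects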

More Difficult Example Has 4 known solutions with approximately the same cost. You need only find one least-cost solution and print the whole path, with states and cumulative costs.

In Addition You must design your own custom data set and run your program on it, too. You will turn in a picture of your data set, similar to the pictures we give you of ours. Turn in commented source code, input and output from all 3 data sets, and your picture.

Beyond Classical Search Chapter 3 covered problems that considered the whole search space and produced a sequence of actions leading to a goal. Chapter 4 covers techniques (some developed outside of AI) that don't try to cover the whole space; for them only the goal state matters, not the steps taken to reach it. The techniques of Chapter 4 tend to use much less memory and are not guaranteed to find an optimal solution.

More Search Methods
- Local Search
  - Hill Climbing
  - Simulated Annealing
  - Beam Search
  - Genetic Search
- Local Search in Continuous Spaces
- Searching with Nondeterministic Actions
- Online Search (agent is executing actions)

Local Search Algorithms and Optimization Problems
- Complete state formulation: for example, for the 8 queens problem, all 8 queens are on the board and need to be moved around to get to a goal state (a concrete encoding is sketched below)
- Equivalent to optimization problems often found in science and engineering
- Start somewhere and try to get to the solution from there
- Local search around the current state to decide where to go next
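
For concreteness, here is one common complete-state encoding of 8 queens with an objective to minimize; this is a generic illustration, not code from the course:

    import random

    def random_state(n=8):
        # Complete state: queens[c] is the row of the queen in column c.
        return [random.randrange(n) for _ in range(n)]

    def attacking_pairs(queens):
        # Objective to minimize: number of pairs of queens attacking each other.
        count = 0
        n = len(queens)
        for c1 in range(n):
            for c2 in range(c1 + 1, n):
                same_row = queens[c1] == queens[c2]
                same_diagonal = abs(queens[c1] - queens[c2]) == c2 - c1
                if same_row or same_diagonal:
                    count += 1
        return count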

Pose Estimation Example Given a geometric model of a 3D object and a 2D image of the object, determine the position and orientation of the object with respect to the camera that snapped the image. [Figure: the 2D image and the 3D object model.] State: (x, y, z, θx, θy, θz)

Gradient What is the gradient of a function?
- In 1D: function f(x); gradient f′(x), the derivative.
- In 2D: function f(x, y); gradient ∇f = (∂f/∂x, ∂f/∂y).
e.g. f(x) = x². f′(x) = ?

Hill Climbing
- "Gradient ascent" solution (note: solutions shown here as max, not min)
- Often used for numerical optimization problems
- How does it work? In continuous space, the gradient tells you the direction in which to move uphill.

Numeric Example [Figure: normal distribution with 0 mean and 1 SD, x axis marked at -1, 0, 1.]
f(x) = c e^(-x²/2)
f′(x) = -x c e^(-x²/2)
f′(1) comes out negative, i.e. move backward.
f′(-1) comes out positive, i.e. move forward.
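
A few explicit gradient-ascent steps on this f (the step size 0.1 is an arbitrary choice for illustration):

    import math

    def f_prime(x, c=1.0):
        # f'(x) = -x * c * e^(-x^2 / 2)
        return -x * c * math.exp(-0.5 * x * x)

    x, step = 1.0, 0.1
    for _ in range(5):
        x += step * f_prime(x)   # follow the gradient uphill
        print(x)                 # x moves from 1 toward the peak at 0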

AI Hill Climbing Steepest-Ascent Hill Climbing

current ← start node
loop do
    neighbor ← a highest-valued successor of current
    if neighbor.Value <= current.Value then return current.State
    current ← neighbor
end loop

At each step, the current node is replaced by the best (highest-valued) neighbor. This is sometimes called greedy local search.
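
A direct Python rendering of that pseudocode; neighbors and value are problem-specific callbacks supplied by the caller:

    def hill_climb(start, neighbors, value):
        # Steepest-ascent hill climbing: neighbors(s) returns successor
        # states, value(s) scores a state (higher is better).
        current = start
        while True:
            succ = list(neighbors(current))
            if not succ:
                return current
            best = max(succ, key=value)
            if value(best) <= value(current):
                return current       # no neighbor improves on current
            current = best

With the 8-queens helpers above, neighbors could move one queen anywhere within its column and value could be lambda s: -attacking_pairs(s).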

Hill Climbing Search [Figure: a current node and successors valued 6, 4, 10, 3, 2, 8.] What if current had a value of 12?

Hill Climbing Problems
- Local maxima
- Plateaus
- Diagonal ridges
What is it sensitive to? Does it have any advantages?

Solving the Problems
- Allow backtracking (What happens to complexity?)
- Stochastic hill climbing: choose at random from uphill moves, using steepness for a probability
- Random restarts: "If at first you don't succeed, try, try again."
- Several moves in each of several directions, then test
- Jump to a different part of the search space

Simulated Annealing
- Variant of hill climbing (so up is good)
- Tries to explore enough of the search space early on, so that the final solution is less sensitive to the start state
- May make some downhill moves before finding a good way to move uphill

Simulated Annealing Comes from the physical process of annealing, in which substances are raised to high energy levels (melted) and then cooled to solid state. The probability of moving to a higher energy state, instead of lower, is p = e^(-ΔE/kT), where ΔE is the positive change in energy level, T is the temperature, and k is Boltzmann's constant. [Figure: heat/cool cycle.]

Simulated Annealing At the beginning, the temperature is high. As the temperature becomes lower:
- kT becomes lower
- ΔE/kT gets bigger
- (-ΔE/kT) gets smaller
- e^(-ΔE/kT) gets smaller
As the process continues, the probability of a downhill move gets smaller and smaller.

For Simulated Annealing ΔE represents the change in the value of the objective function. Since the physical relationships no longer apply, drop k. So p = e^(-ΔE/T). We need an annealing schedule, which is a sequence of values of T: T0, T1, T2, ...
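
One common choice (not the only one) is geometric cooling, Ti+1 = α·Ti. A quick sketch of how the acceptance probability for a fixed downhill move shrinks under it:

    import math

    T, alpha, delta_E = 10.0, 0.9, 1.0     # delta_E: size of a downhill move
    for _ in range(5):
        print(T, math.exp(-delta_E / T))   # acceptance probability falls with T
        T *= alpha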

Simulated Annealing Algorithm

current ← start node
for each T on the schedule              /* need a schedule */
    next ← randomly selected successor of current
    evaluate next; if it's a goal, return it
    ΔE ← next.Value - current.Value     /* already negated */
    if ΔE > 0 then current ← next       /* better than current */
    else current ← next with probability e^(ΔE/T)

How would you do this probabilistic selection?
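
A minimal Python version of this loop, under the same conventions (schedule is any sequence of temperatures; successor, value, and is_goal are problem-specific callbacks):

    import math
    import random

    def simulated_annealing(start, successor, value, schedule, is_goal=None):
        current = start
        for T in schedule:
            nxt = random.choice(successor(current))    # random successor
            if is_goal is not None and is_goal(nxt):
                return nxt
            delta_E = value(nxt) - value(current)      # > 0 means nxt is better
            if delta_E > 0:
                current = nxt                          # always take uphill moves
            elif random.random() <= math.exp(delta_E / T):
                current = nxt                          # downhill with prob e^(dE/T)
        return current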

Probabilistic Selection Select next with probability p: generate a random number uniformly in [0, 1]; if it's <= p, select next. [Figure: number line from 0 to 1 with p marked.]
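
In code that test is one line; a tiny helper, for illustration:

    import random

    def accept(p):
        # True with probability p (p assumed to lie in [0, 1]).
        return random.random() <= p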

Simulated Annealing Properties
- At a fixed "temperature" T, state occupation probability reaches the Boltzmann distribution p(x) = e^(E(x)/kT)
- If T is decreased slowly enough (very slowly), the procedure will reach the best state.
- "Slowly enough" has proven too slow for some researchers, who have developed alternate schedules.

Simulated Annealing Schedules Acceptance criterion and cooling schedule

Simulated Annealing Applications
Basic Problems:
- Traveling salesman
- Graph partitioning
- Matching problems
- Graph coloring
- Scheduling
Engineering:
- VLSI design: placement, routing, array logic minimization, layout
- Facilities layout
- Image processing
- Code design in information theory

Local Beam Search Keeps more previous states in memory: simulated annealing just kept one previous state in memory, but this search keeps k states in memory.
- randomly generate k initial states
- if any state is a goal, terminate
- else, generate all successors and select best k
- repeat
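
A sketch of that loop; k is the beam width, and random_state, successors, value, and is_goal are problem-specific callbacks (the names are mine, not from the slides):

    import heapq

    def local_beam_search(k, random_state, successors, value, is_goal,
                          max_iters=1000):
        states = [random_state() for _ in range(k)]     # k random initial states
        for _ in range(max_iters):
            for s in states:
                if is_goal(s):
                    return s                            # goal found: terminate
            pool = [n for s in states for n in successors(s)]
            if not pool:
                break
            states = heapq.nlargest(k, pool, key=value) # select best k
        return max(states, key=value)                   # best state in final beam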

Coming next: Genetic Algorithms, which are motivated by human genetics. How do you search a very large search space in a fitness-oriented way?