More on Search: A* and Optimization


More on Search: A* and Optimization CPSC 315 – Programming Studio Spring 2013 Project 2, Lecture 4 Adapted from slides of Yoonsuck Choe

A* Algorithm
Avoid expanding paths that are already expensive. Rank nodes by f(n) = g(n) + h(n), where g(n) is the cost of the path found so far from the start to node n, and h(n) is an estimate of the remaining cost from n to the goal. h(n) should never overestimate the actual cost of the best solution through that node (it must be admissible). The better the estimate h is, the better the algorithm works, but h should also be fast to compute.
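
As one concrete illustration (not from the slides), Manhattan distance is a commonly used admissible heuristic for movement on a grid with unit-cost steps, since it never overestimates the true remaining cost. A minimal sketch, with the coordinate representation assumed:

```python
def manhattan(node, goal):
    """Admissible heuristic for 4-connected grid movement with unit step cost:
    the 'city block' distance can never exceed the true remaining path cost."""
    (x1, y1), (x2, y2) = node, goal
    return abs(x1 - x2) + abs(y1 - y2)
```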

A*
Then apply best-first search: repeatedly expand the node with the lowest overall estimate f. For each state reachable from that node, update its path if this offers a better one, i.e. if the new g(n) is lower than that of the path previously found. With a consistent heuristic, the value of f only increases as paths are extended.
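
A sketch of this search loop in Python, assuming the graph is given as an adjacency dict of the form {node: [(neighbor, edge_cost), ...]} and h(node, goal) is an admissible heuristic such as the Manhattan distance above; the names and representation are illustrative, not from the slides:

```python
import heapq

def a_star(graph, start, goal, h):
    """graph: {node: [(neighbor, edge_cost), ...]}; h(node, goal): admissible estimate."""
    frontier = [(h(start, goal), start)]        # priority queue ordered by f = g + h
    g = {start: 0}                              # best path cost found so far
    parent = {start: None}
    while frontier:
        f, node = heapq.heappop(frontier)
        if node == goal:                        # lowest-f node is the goal: reconstruct path
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return list(reversed(path)), g[goal]
        for neighbor, cost in graph.get(node, []):
            new_g = g[node] + cost
            if new_g < g.get(neighbor, float("inf")):   # better path found: update
                g[neighbor] = new_g
                parent[neighbor] = node
                heapq.heappush(frontier, (new_g + h(neighbor, goal), neighbor))
    return None, float("inf")                   # no path exists
```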

[Figure sequence omitted: a worked A* example on a small weighted graph. Each slide annotates the frontier nodes with their g, h, and f = g + h values; at every step the node with the lowest f is expanded, and g values are updated when a cheaper path to a node is found.]

Improving Results and Optimization
Assume a state with many variables, and some function of that state whose value you want to maximize or minimize. Searching the entire space is too complicated: you can't evaluate every possible combination of variables, and the function might be difficult to evaluate analytically.

Iterative Improvement
Start with a complete valid state and gradually work toward better and better states. Sometimes the aim is a true optimum, though reaching one is not always possible. The states may be discrete or continuous.

Simple Example
One dimension (in practice there are typically more): a plot of function value versus x. Start at a valid state and try to maximize the function value, moving to better states until the maximum is found.
[Plots omitted: function value versus x, showing the starting state and successive moves toward the maximum.]

Hill-Climbing
Choose a random starting state. Repeat: from the current state, generate n random steps in random directions and move to the one that gives the best new value. Continue while some better state is found (i.e., exit if none of the n steps is better than the current state).
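
A minimal Python sketch of this procedure, assuming a real-valued objective f over a vector of floats and a fixed step size; the function and parameter names are illustrative:

```python
import random

def hill_climb(f, state, n_steps=10, step_size=0.1, max_iters=1000):
    """Hill climbing with random candidate steps.
    f: objective to maximize; state: list of floats (current position)."""
    best_value = f(state)
    for _ in range(max_iters):
        # Generate n random steps in random directions from the current state.
        candidates = [[x + random.uniform(-step_size, step_size) for x in state]
                      for _ in range(n_steps)]
        best_candidate = max(candidates, key=f)
        if f(best_candidate) <= best_value:     # no candidate improves: stop
            break
        state, best_value = best_candidate, f(best_candidate)
    return state, best_value

# Usage: maximize a simple 1-D function, as in the example plots.
# hill_climb(lambda s: -(s[0] - 2.0) ** 2, [0.0])
```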

[Plot sequence omitted: hill-climbing on the function value versus x example. From a random starting point, three random steps are generated and the best one becomes the new position; this repeats until no step improves the value, at which point the search stops.]

Problems With Hill Climbing
Random steps are wasteful; this is addressed by other methods (such as the gradient-based steps below). The search can also get stuck on local maxima, plateaus, and ridges. One remedy is to try random restart locations; another is to keep the n best choices at each step, which is also called "beam search." Comparing to game trees: hill climbing basically looks at some number of available next moves and chooses the one that looks best at the moment, while beam search follows only the best-looking n moves.
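
One way to make random restarts concrete is a wrapper around any local search routine (such as the hill_climb sketch above); the names and defaults are assumptions, not from the slides:

```python
import random

def random_restart(climb, f, dim, n_restarts=20, low=-10.0, high=10.0):
    """Run a local search routine from several random starting points
    and keep the best result found; climb(f, start) returns (state, value)."""
    best_state, best_value = None, float("-inf")
    for _ in range(n_restarts):
        start = [random.uniform(low, high) for _ in range(dim)]
        state, value = climb(f, start)
        if value > best_value:
            best_state, best_value = state, value
    return best_state, best_value

# Usage (with the earlier hill_climb sketch):
# random_restart(hill_climb, lambda s: -(s[0] - 2.0) ** 2, dim=1)
```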

Gradient Descent (or Ascent)
A simple modification to hill climbing that generally assumes a continuous state space. The idea is to take more intelligent steps: look at the local gradient (the direction of largest change) and take a step in that direction, with step size proportional to the magnitude of the gradient. This tends to yield much faster convergence to a maximum.
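
A minimal gradient-ascent sketch in Python, using a finite-difference estimate of the gradient since the function may be hard to treat analytically; the step size and tolerance values are illustrative assumptions:

```python
def gradient_ascent(f, state, step=0.1, eps=1e-5, tol=1e-8, max_iters=10000):
    """Maximize f by stepping in the direction of the (numerically estimated) gradient."""
    state = list(state)
    for _ in range(max_iters):
        # Central-difference estimate of the gradient in each dimension.
        grad = []
        for i in range(len(state)):
            up, down = state[:], state[:]
            up[i] += eps
            down[i] -= eps
            grad.append((f(up) - f(down)) / (2 * eps))
        # Step proportional to the gradient; stop when the step becomes tiny.
        if all(abs(g) * step < tol for g in grad):
            break
        state = [x + step * g for x, g in zip(state, grad)]
    return state, f(state)

# Usage: find the maximum of a 1-D parabola near x = 2.
# gradient_ascent(lambda s: -(s[0] - 2.0) ** 2, [0.0])
```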

[Plot sequence omitted: gradient ascent on the function value versus x example. From a random starting point, take a step in the direction of largest increase (obvious in 1D, but it must be computed in higher dimensions) and repeat; when the next step would actually be lower, stop, or reduce the step size to "hone in" and converge to a (local) maximum.]

Dealing with Local Minima
Various modifications of hill climbing and gradient descent can help:
Random starting positions: try several and choose the best result.
Random steps when a maximum is reached.
Conjugate gradient descent/ascent: choose a gradient direction and look for the maximum along that direction, then from that point go in a different direction.
Simulated annealing (next).

Simulated Annealing
Annealing: heat up a metal and let it cool to make it harder. Heating gives the atoms freedom to move around; cooling "hardens" the metal into a stronger state. The idea is like hill climbing, but you can take steps down as well as up, and the probability of allowing "down" steps goes down with time.

Simulated Annealing
Use a heuristic/goal/fitness function E (the "energy"). Generate a move at random and compute ΔE = E_new − E_old. If ΔE ≤ 0, accept the move. If ΔE > 0, accept the move with probability P(ΔE) = exp(−ΔE / T), where T is the "temperature."

Simulated Annealing
Compare P(ΔE) with a random number drawn uniformly from 0 to 1; if the random number is below P(ΔE), accept the move. The temperature is decreased over time. When T is higher, downward moves are more likely to be accepted; T = 0 is equivalent to hill climbing (downward moves are never accepted). When ΔE is smaller, downward moves are also more likely to be accepted.

"Cooling Schedule"
The speed at which the temperature is reduced has an effect: too fast and the optima are not found; too slow and time is wasted.
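
A compact Python sketch of this procedure, following the minimizing-energy formulation of the acceptance rule above (the plots that follow equivalently maximize the function value, i.e. minimize E = −f); the geometric cooling schedule and parameter values are illustrative assumptions:

```python
import math
import random

def simulated_annealing(energy, state, step_size=0.5,
                        t_start=10.0, t_end=1e-3, cooling=0.95):
    """Minimize energy(state), where state is a list of floats.
    Moves that increase the energy (dE > 0) are accepted with probability exp(-dE / T)."""
    best = list(state)
    t = t_start
    while t > t_end:                        # cooling schedule: T shrinks geometrically
        candidate = [x + random.uniform(-step_size, step_size) for x in state]
        d_e = energy(candidate) - energy(state)
        if d_e <= 0 or random.random() < math.exp(-d_e / t):
            state = candidate               # accept the move
            if energy(state) < energy(best):
                best = list(state)
        t *= cooling                        # reduce temperature over time
    return best, energy(best)

# Usage: minimize E(x) = (x - 2)^2, i.e. maximize the corresponding fitness.
# simulated_annealing(lambda s: (s[0] - 2.0) ** 2, [0.0])
```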

[Plot sequence omitted: simulated annealing on the function value versus x example. Starting from a random point at very high T, random steps are taken; early on, moves are accepted even when E is lower. As T falls to high and then medium, lower moves are still sometimes accepted and sometimes rejected, while higher-E moves are always accepted. At low T, steps with a small change in E may still be accepted, larger decreases are rejected, and the search eventually converges to a maximum.]

Other Optimization Approach: Genetic Algorithms
A state is a "chromosome" whose genes are the variables; the optimization function is its "fitness." Create "generations" of solutions: a set of several valid solutions in which the most fit carry on. Generate the next generation by mutating genes of the previous generation and by "breeding": pick two (or more) "parents" and create children by combining their genes.
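
A toy genetic-algorithm sketch in Python for real-valued chromosomes; the population size, mutation rate, and single-point "breeding" scheme are illustrative assumptions rather than anything specified in the slides:

```python
import random

def genetic_search(fitness, n_genes, pop_size=30, generations=100,
                   mutation_rate=0.1, low=-10.0, high=10.0):
    """Evolve a population of chromosomes (lists of floats) to maximize fitness."""
    population = [[random.uniform(low, high) for _ in range(n_genes)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Most fit solutions carry on: keep the top half as parents.
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            mom, dad = random.sample(parents, 2)
            cut = random.randrange(1, n_genes) if n_genes > 1 else 0
            child = mom[:cut] + dad[cut:]          # "breeding": combine parents' genes
            child = [g + random.gauss(0, 1.0) if random.random() < mutation_rate else g
                     for g in child]               # mutation
            children.append(child)
        population = parents + children
    best = max(population, key=fitness)
    return best, fitness(best)

# Usage: maximize a simple one-gene fitness function.
# genetic_search(lambda c: -(c[0] - 2.0) ** 2, n_genes=1)
```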

More on Optimization
There are lots of other variants and approaches, ranging from heuristic methods to formal methods. Optimization is a critical problem and is constantly being studied.