Randomized Hill Climbing


Randomized Hill Climbing Neighborhood
Randomized Hill Climbing: Sample p points randomly in the neighborhood of the currently best solution; determine the best of the p sampled points. If it is better than the current solution, make it the new current solution and continue the search; otherwise, terminate, returning the current solution.
Advantages: easy to apply, does not need many resources, usually fast.
Problems: How do I define my neighborhood? What parameter p should I choose?
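A minimal sketch of the procedure described above (the function names and the 1-D test objective below are illustrative assumptions, not part of the slides):

```python
import random

def randomized_hill_climbing(initial, evaluate, sample_neighbor, p):
    """Sample p random neighbors of the current solution; move to the best
    sampled neighbor if it improves the objective, otherwise terminate."""
    current = initial
    current_val = evaluate(current)
    while True:
        # Sample p points in the neighborhood of the current solution.
        candidates = [sample_neighbor(current) for _ in range(p)]
        best = max(candidates, key=evaluate)
        if evaluate(best) > current_val:
            current, current_val = best, evaluate(best)
        else:
            # No sampled point is better: return the current solution.
            return current, current_val

# Example usage (hypothetical 1-D objective): maximize -(x - 3)^2.
random.seed(1)
sol, val = randomized_hill_climbing(
    0.0,
    lambda x: -(x - 3.0) ** 2,
    lambda x: x + random.uniform(-0.5, 0.5),
    p=30,
)
```

Since the algorithm only moves on strict improvement, the returned value is never worse than the value of the initial solution.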

Example: Randomized Hill Climbing
Maximize f(x,y,z) = |x-y-0.2| * |x*z-0.8| * |0.3-z*z*y| with x, y, z in [0,1].
Neighborhood design: create 50 solutions s such that
s = (min(1, max(0, x+r1)), min(1, max(0, y+r2)), min(1, max(0, z+r3)))
with r1, r2, r3 being random numbers in [-0.05, +0.05].
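A sketch of this example in Python, following the slide's objective and neighborhood design (the starting point is an assumption):

```python
import random

def f(x, y, z):
    # Objective function from the slide.
    return abs(x - y - 0.2) * abs(x * z - 0.8) * abs(0.3 - z * z * y)

def neighbor(s):
    # Perturb each coordinate by a random amount in [-0.05, +0.05],
    # clipping back into [0, 1] as in the slide's neighborhood design.
    return tuple(min(1.0, max(0.0, v + random.uniform(-0.05, 0.05))) for v in s)

def hill_climb(start, p=50):
    """Sample p = 50 neighbors per step; stop when none improves f."""
    current, current_val = start, f(*start)
    while True:
        samples = [neighbor(current) for _ in range(p)]
        best = max(samples, key=lambda s: f(*s))
        if f(*best) > current_val:
            current, current_val = best, f(*best)
        else:
            return current, current_val

random.seed(0)
solution, value = hill_climb((0.5, 0.5, 0.5))
```

Because the neighborhood clips to [0,1] in every coordinate, the search never leaves the feasible box.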

Problems of Hill Climbing
Terminates at a local optimum (moreover, the deviation between the local and the global optimum is usually unknown).
Has problems with plateaus (terminates), especially if the size of the plateau is larger than the neighborhood size.
Has problems with ridges (usually falls off the "golden" path).
The obtained solution strongly depends on the initial configuration.
Too large a neighborhood size: random search, might shoot over hills. Too small a neighborhood size: slow convergence, might get stuck on small hills.
Too large a parameter p: slow search; too small a parameter p: terminates without getting really close to the mountain top.

Hill Climbing Variations
Execute the algorithm for a number of initial configurations (randomized hill climbing with restart).
Use information from previous runs to improve the choice of initial configurations.
Dynamically adjust the size of the neighborhood and the number of points sampled. For example, start with large neighborhoods and decrease the neighborhood size as the search evolves.
Allow downward moves: simulated annealing.
Resample before terminating (e.g., sample p points; if there is no improvement, sample another 2p points; if there is still no improvement, sample another 4p points; if there is no improvement after that, finally terminate).
Use domain-specific knowledge to determine neighborhood sizes and the number of points sampled.
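The first variation, restarting from several random initial configurations and keeping the overall best result, can be sketched as follows (function names and the test objective are illustrative assumptions):

```python
import random

def hill_climb_with_restarts(evaluate, sample_neighbor, random_start, p, restarts):
    """Randomized hill climbing with restart: run the basic procedure from
    several random initial configurations; return the overall best result."""
    best, best_val = None, float("-inf")
    for _ in range(restarts):
        current = random_start()
        current_val = evaluate(current)
        while True:
            candidates = [sample_neighbor(current) for _ in range(p)]
            cand = max(candidates, key=evaluate)
            if evaluate(cand) > current_val:
                current, current_val = cand, evaluate(cand)
            else:
                break  # local optimum reached for this run
        if current_val > best_val:
            best, best_val = current, current_val
    return best, best_val

# Example usage (hypothetical objective): maximize -|x| from random starts.
random.seed(2)
best, best_val = hill_climb_with_restarts(
    lambda x: -abs(x),
    lambda x: x + random.uniform(-1.0, 1.0),
    lambda: random.uniform(-10.0, 10.0),
    p=20,
    restarts=5,
)
```

Each restart is independent; a more elaborate version could bias `random_start` toward regions that produced good results in earlier runs, as suggested above.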

Hill Climbing for State Space Search
Define a neighborhood as the set of states that can be reached by n operator applications from the current state (where n is a constant chosen based on the characteristics of the particular search problem).
The state space version creates all states in the neighborhood of the current state (alternatively, it could create just some of them, which would be a randomized version) and picks the one with the best evaluation as the new current state, or it terminates unsuccessfully if no state is better than the current state.
A variable path has to be added to the hill climbing code that memorizes the path from the initial state to the current state. The path variable is initialized with an empty list; every time a new current state is obtained, the operator or operator sequence used to reach it is appended to the path variable.
A goal test has to be added to the hill climbing code: if it returns true, the algorithm terminates, returning the contents of its path variable as its solution.
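A sketch of the state-space version described above, with the path variable and goal test, for n = 1 operator application per step (the operator-dictionary interface is an assumption for illustration):

```python
def hill_climb_state_space(initial, operators, evaluate, is_goal):
    """State-space hill climbing: expand all neighbors of the current state,
    move to the best strictly better one, and record the applied operators.
    `operators` maps an operator name to a function state -> new state."""
    current = initial
    path = []  # operator sequence from the initial state to `current`
    while True:
        if is_goal(current):
            return path  # success: return how we got here
        neighbors = [(name, op(current)) for name, op in operators.items()]
        name, best = max(neighbors, key=lambda ns: evaluate(ns[1]))
        if evaluate(best) <= evaluate(current):
            return None  # no neighbor is better: stuck at a non-goal state
        current = best
        path.append(name)

# Example usage (hypothetical toy problem): walk on the number line to 3.
ops = {"inc": lambda s: s + 1, "dec": lambda s: s - 1}
path = hill_climb_state_space(0, ops, lambda s: -abs(s - 3), lambda s: s == 3)
```

With this evaluation function the search reaches the goal and returns the operator sequence; on a problem with local optima it would instead return None.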

Backtracking
Popular for state space search problems.
Idea (make the initial state the "current state"; then proceed as outlined below):
Apply an operator (e.g., the best one) that has not been applied before to the current state. The state so obtained becomes the new current state (if it is a goal state, the algorithm terminates and returns a solution).
If there is no such operator, backtrack: the predecessor of the current state becomes the new current state (if all operators have been applied to the initial state, the algorithm terminates without a solution).
(The diagram on the original slide marks with X the direction the search came from and the states already explored.)
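A recursive sketch of this idea, where returning from a call plays the role of moving back to the predecessor state (the graph-based interface below is an assumed example, not from the slides):

```python
def backtrack(state, applicable_ops, apply_op, is_goal, visited=None):
    """Depth-first backtracking: apply each untried operator to the current
    state; on a dead end, return to the predecessor via the call stack."""
    if visited is None:
        visited = set()
    visited.add(state)
    if is_goal(state):
        return [state]  # goal found: solution is the path of states
    for op in applicable_ops(state):
        succ = apply_op(state, op)
        if succ in visited:
            continue  # already explored: skip (marked X on the slide)
        solution = backtrack(succ, applicable_ops, apply_op, is_goal, visited)
        if solution is not None:
            return [state] + solution
    return None  # all operators tried: backtrack to the predecessor

# Example usage on a small hypothetical state graph: A -> {B, C}, C -> D.
graph = {"A": ["B", "C"], "B": [], "C": ["D"], "D": []}
path = backtrack("A", lambda s: graph[s], lambda s, op: op, lambda s: s == "D")
```

Here the dead end at B forces a backtrack to A before the search continues through C to the goal D.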

Backtracking Pseudocode
By Matuszek: https://www.cis.upenn.edu/~matuszek/cit594-2002/Pages/backtracking.html
Wikipedia: https://en.wikipedia.org/wiki/Backtracking