Todd W. Neller Gettysburg College.  Specification: grid size, start state (square), goal state, jump numbers for each non- goal state.  Jump number:

Slides:



Advertisements
Similar presentations
Informed search algorithms
Advertisements

Informed search algorithms
An Introduction to Artificial Intelligence
Greedy best-first search Use the heuristic function to rank the nodes Search strategy –Expand node with lowest h-value Greedily trying to find the least-cost.
Iterative Deepening A* & Constraint Satisfaction Problems Lecture Module 6.
BackTracking Algorithms
Informed Search Methods How can we improve searching strategy by using intelligence? Map example: Heuristic: Expand those nodes closest in “as the crow.
1 Heuristic Search Chapter 4. 2 Outline Heuristic function Greedy Best-first search Admissible heuristic and A* Properties of A* Algorithm IDA*
Todd Neller 1 Adrian Fisher 2 Munyaradzi Choga 1 Samir Lalvani 1 Kyle McCarty 1 1 Gettysburg College 2 Adrian Fisher Design Ltd.
Problem Solving Agents A problem solving agent is one which decides what actions and states to consider in completing a goal Examples: Finding the shortest.
5-1 Chapter 5 Tree Searching Strategies. 5-2 Satisfiability problem Tree representation of 8 assignments. If there are n variables x 1, x 2, …,x n, then.
CSE 380 – Computer Game Programming Pathfinding AI
Problem Solving and Search in AI Part I Search and Intelligence Search is one of the most powerful approaches to problem solving in AI Search is a universal.
CPSC 322, Lecture 15Slide 1 Stochastic Local Search Computer Science cpsc322, Lecture 15 (Textbook Chpt 4.8) February, 6, 2009.
Artificial Intelligence
Cooperating Intelligent Systems Informed search Chapter 4, AIMA.
Teaching Stochastic Local Search Todd W. Neller. FLAIRS-2005 Motivation Intro AI course has many topics, little time Best learning is experiential, but.
Introduction to Artificial Intelligence Local Search (updated 4/30/2006) Henry Kautz.
CS 460 Spring 2011 Lecture 3 Heuristic Search / Local Search.
Cooperating Intelligent Systems Informed search Chapter 4, AIMA 2 nd ed Chapter 3, AIMA 3 rd ed.
CSC344: AI for Games Lecture 4: Informed search
CS 326A: Motion Planning Basic Motion Planning for a Point Robot.
Chapter 5: Path Planning Hadi Moradi. Motivation Need to choose a path for the end effector that avoids collisions and singularities Collisions are easy.
Cooperating Intelligent Systems Informed search Chapter 4, AIMA 2 nd ed Chapter 3, AIMA 3 rd ed.
Solving problems by searching
Informed Search Next time: Search Application Reading: Machine Translation paper under Links Username and password will be mailed to class.
Brute Force Search Depth-first or Breadth-first search
Heuristic Search Heuristic - a “rule of thumb” used to help guide search often, something learned experientially and recalled when needed Heuristic Function.
Vilalta&Eick: Informed Search Informed Search and Exploration Search Strategies Heuristic Functions Local Search Algorithms Vilalta&Eick: Informed Search.
World space = physical space, contains robots and obstacles Configuration = set of independent parameters that characterizes the position of every point.
Dr.Abeer Mahmoud ARTIFICIAL INTELLIGENCE (CS 461D) Dr. Abeer Mahmoud Computer science Department Princess Nora University Faculty of Computer & Information.
Informed Search Methods How can we make use of other knowledge about the problem to improve searching strategy? Map example: Heuristic: Expand those nodes.
Introduction to search Chapter 3. Why study search? §Search is a basis for all AI l search proposed as the basis of intelligence l inference l all learning.
Dr.Abeer Mahmoud ARTIFICIAL INTELLIGENCE (CS 461D) Dr. Abeer Mahmoud Computer science Department Princess Nora University Faculty of Computer & Information.
Today’s Topics FREE Code that will Write Your PhD Thesis, a Best-Selling Novel, or Your Next Methods for Intelligently/Efficiently Searching a Space.
1 Shanghai Jiao Tong University Informed Search and Exploration.
Informed search algorithms Chapter 4. Best-first search Idea: use an evaluation function f(n) for each node –estimate of "desirability"  Expand most.
ISC 4322/6300 – GAM 4322 Artificial Intelligence Lecture 3 Informed Search and Exploration Instructor: Alireza Tavakkoli September 10, 2009 University.
For Monday Read chapter 4, section 1 No homework..
Lecture 3: Uninformed Search
CHAPTER 4, Part II Oliver Schulte Summer 2011 Local Search.
For Friday Finish chapter 6 Program 1, Milestone 1 due.
For Wednesday Read chapter 6, sections 1-3 Homework: –Chapter 4, exercise 1.
For Wednesday Read chapter 5, sections 1-4 Homework: –Chapter 3, exercise 23. Then do the exercise again, but use greedy heuristic search instead of A*
A General Introduction to Artificial Intelligence.
Feng Zhiyong Tianjin University Fall  Best-first search  Greedy best-first search  A * search  Heuristics  Local search algorithms  Hill-climbing.
Chapter 10 Minimization or Maximization of Functions.
Informed Search II CIS 391 Fall CIS Intro to AI 2 Outline PART I  Informed = use problem-specific knowledge  Best-first search and its variants.
The A* and applications to Games Sources: My own Joseph Siefers presentation 2008.
Pedagogical Possibilities for the 2048 Puzzle Game Todd W. Neller.
Local Search. Systematic versus local search u Systematic search  Breadth-first, depth-first, IDDFS, A*, IDA*, etc  Keep one or more paths in memory.
Chapter 3.5 and 3.6 Heuristic Search Continued. Review:Learning Objectives Heuristic search strategies –Best-first search –A* algorithm Heuristic functions.
CPSC 420 – Artificial Intelligence Texas A & M University Lecture 5 Lecturer: Laurie webster II, M.S.S.E., M.S.E.e., M.S.BME, Ph.D., P.E.
Lecture 2: Problem Solving using State Space Representation CS 271: Fall, 2008.
For Monday Read chapter 4 exercise 1 No homework.
Informed Search Methods
For Monday Chapter 6 Homework: Chapter 3, exercise 7.
Informed Search Methods
Constraint Satisfaction Problems vs. Finite State Problems
Artificial Intelligence (CS 370D)
HW #1 Due 29/9/2008 Write Java Applet to solve Goats and Cabbage “Missionaries and cannibals” problem with the following search algorithms: Breadth first.
Heuristics Local Search
Sudoku.
Lecture 9 Administration Heuristic search, continued
First Exam 18/10/2010.
Search.
Search.
Midterm Review.
Area Coverage Problem Optimization by (local) Search
Presentation transcript:

Todd W. Neller Gettysburg College

• Specification: grid size, start state (square), goal state, jump numbers for each non-goal state.
• Jump number: Move exactly that many squares up, down, left, or right. (Not diagonally.) (See the move-enumeration sketch below.)
• Objectives:
  ◦ Find a path from start to goal.
  ◦ Find the shortest of these paths.
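To make the jump rule concrete, here is a minimal Python sketch of how the legal jumps from one square might be enumerated. The function name legal_moves and the grid representation (an n-by-n list of lists of jump numbers, with 0 marking the goal) are assumptions for illustration, not part of the original specification.

    # Minimal sketch: enumerate the legal jumps from square (r, c).
    # Assumes `grid` is an n-by-n list of lists of jump numbers, with 0 marking
    # the goal square. The name legal_moves is illustrative, not from the slides.
    def legal_moves(grid, r, c):
        n = len(grid)
        jump = grid[r][c]
        if jump == 0:                  # goal square: no outgoing jumps
            return []
        moves = []
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):   # up, down, left, right
            nr, nc = r + dr * jump, c + dc * jump
            if 0 <= nr < n and 0 <= nc < n:                 # jump must stay on the board
                moves.append((nr, nc))
        return moves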

• Preliminaries:
  ◦ Maze Representation
• Uninformed Search:
  ◦ Maze Evaluation
  ◦ Maze Evaluation II
• Stochastic Local Search:
  ◦ Hill Descent
  ◦ Hill Descent with Random Restarts
  ◦ Hill Descent with Random Uphill Steps
  ◦ Simulated Annealing
• Machine Learning:
  ◦ Restart Bandit
  ◦ Restart SARSA

• Maze Representation
• Problem: Generate and print a random n-by-n Rook Jumping Maze (5 ≤ n ≤ 10) where there is a legal move (jump) from each non-goal state. (See the generation sketch below.)
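A minimal Python sketch of one way to meet this requirement: draw each non-goal square's jump number uniformly from the jumps that stay on the board, which guarantees at least one legal move from every non-goal square. The goal placement at the bottom-right corner and the function names are assumptions for illustration.

    import random

    # Minimal sketch: fill an n-by-n grid with random jump numbers and print it.
    # Drawing each jump from 1..(largest in-bounds jump from that square) guarantees
    # every non-goal square has at least one legal move. Placing the goal (0) at the
    # bottom-right corner is an assumption, not a requirement from the slides.
    def random_maze(n):
        grid = []
        for r in range(n):
            row = []
            for c in range(n):
                max_jump = max(r, c, n - 1 - r, n - 1 - c)   # farthest in-bounds jump
                row.append(random.randint(1, max_jump))
            grid.append(row)
        grid[n - 1][n - 1] = 0                               # goal square
        return grid

    def print_maze(grid):
        for row in grid:
            print(" ".join(str(x) for x in row))

    if __name__ == "__main__":
        print_maze(random_maze(5))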

• Maze Evaluation
  ◦ Problem: [Maze Representation step] Then, for each cell, compute and print the minimum number of moves needed to reach that cell from the start cell, or "--" if no path exists from the start cell, i.e. the cell is unreachable.
  ◦ Breadth-first search (see the sketch below)
  ◦ Print objective function: negative goal distance, or a large positive number if the goal is unreachable.
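A minimal Python sketch of this evaluation, building on the hypothetical legal_moves helper above. The start at the top-left corner, the goal at the bottom-right corner, and the UNREACHABLE constant are assumptions for illustration.

    from collections import deque

    UNREACHABLE = 1_000_000      # assumed "large positive number" for an unsolvable maze

    # Breadth-first search from the start square; returns a dict mapping each
    # reachable square to its minimum number of jumps from the start.
    def bfs_distances(grid, start=(0, 0)):
        dist = {start: 0}
        queue = deque([start])
        while queue:
            r, c = queue.popleft()
            for nxt in legal_moves(grid, r, c):
                if nxt not in dist:
                    dist[nxt] = dist[(r, c)] + 1
                    queue.append(nxt)
        return dist

    # Objective to minimize: negative goal distance, or UNREACHABLE if the goal
    # cannot be reached, so that longer shortest paths score better (lower).
    def objective(grid, start=(0, 0), goal=None):
        goal = goal or (len(grid) - 1, len(grid) - 1)
        dist = bfs_distances(grid, start)
        return -dist[goal] if goal in dist else UNREACHABLE

    # Print the distance table, using "--" for unreachable cells.
    def print_distances(grid, start=(0, 0)):
        dist = bfs_distances(grid, start)
        n = len(grid)
        for r in range(n):
            print(" ".join(f"{dist[(r, c)]:2d}" if (r, c) in dist else "--"
                           for c in range(n)))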

• Maze design as search
  ◦ Search the space of possible maze designs for one that minimizes the objective function.
• Stochastic Local Search (SLS):
  ◦ Hill Descent (see the sketch below)
  ◦ Hill Descent with Random Restarts
  ◦ Hill Descent with Random Uphill Steps
  ◦ Simulated Annealing
• Additional resources for teaching SLS:
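A minimal Python sketch of the simplest of these, hill descent, building on the hypothetical random_maze and objective functions above: a step re-randomizes one non-goal square's jump number and is kept only if the objective does not worsen. The step definition and iteration count are assumptions. Random restarts would wrap this loop and keep the best maze over several runs; random uphill steps and simulated annealing would change only the acceptance test.

    import random

    # Re-randomize one non-goal square's jump number; return undo information.
    def random_step(grid):
        n = len(grid)
        while True:
            r, c = random.randrange(n), random.randrange(n)
            if grid[r][c] != 0:                     # never change the goal square
                break
        old = grid[r][c]
        max_jump = max(r, c, n - 1 - r, n - 1 - c)
        grid[r][c] = random.randint(1, max_jump)
        return r, c, old

    # Hill descent: accept improving or sideways steps, undo worsening ones.
    def hill_descent(n=5, iterations=10_000):
        grid = random_maze(n)
        best = objective(grid)
        for _ in range(iterations):
            r, c, old = random_step(grid)
            score = objective(grid)
            if score <= best:
                best = score
            else:
                grid[r][c] = old                    # reject: restore previous jump number
        return grid, best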

• “I really hate this damned machine; I wish that they would sell it. / It never does quite what I want but only what I tell it.” - Anonymous
• Our maze designs are only as good as our objective function.
• While maximizing shortest path length is a simple starting point for maze design, we can do better.
• Maze Evaluation II
• Problem: Define a better maze objective function and argue why it leads to improved maze quality.
• Features: black/white holes, start/goal positions, shortest-solution uniqueness, forward/backward branching, same-jump clusters, etc. (See the illustrative sketch below.)
• Forthcoming ICCG’10 paper: Rook Jumping Maze Design Considerations
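Purely for illustration, here is one way such a combined objective could look in Python, reusing the hypothetical bfs_distances and UNREACHABLE from the evaluation sketch. The two extra features shown (cells unreachable from the start as a rough proxy for holes, and adjacent equal jump numbers as a proxy for same-jump clusters) and their weights are assumptions, not the features or weights from the paper.

    # Illustrative sketch only: a weighted combination of maze-quality features.
    def objective2(grid, start=(0, 0), goal=None):
        n = len(grid)
        goal = goal or (n - 1, n - 1)
        dist = bfs_distances(grid, start)
        if goal not in dist:
            return UNREACHABLE
        unreachable = n * n - len(dist)             # cells no path reaches from the start
        same_jump_pairs = sum(                      # horizontally adjacent equal jump numbers
            1
            for r in range(n) for c in range(n - 1)
            if grid[r][c] == grid[r][c + 1] != 0
        )
        # Weights 2 and 1 are arbitrary; still reward long shortest paths.
        return -dist[goal] + 2 * unreachable + 1 * same_jump_pairs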

• SLS is an anytime algorithm
  ◦ More search iterations → same/better maze design
  ◦ When generating many mazes, how does one balance utility of computational time versus utility of maze quality?
• Restart Bandit (see the sketch below)
  ◦ n-armed bandit MDP with # iterations as arms
  ◦ ε-greedy/softmax strategy for action selection
• Restart SARSA
  ◦ Use SARSA to map # iterations since restart and best maze evaluation to actions {GO, RESTART}
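A minimal ε-greedy sketch of the restart-bandit idea, building on the hypothetical hill_descent above: each arm is an iteration budget per restart, and the reward here is simply the negated objective of the resulting maze. The arm values, ε, and the reward definition are assumptions; a fuller version would fold computation time into the reward to capture the quality-versus-time trade-off on the slide.

    import random

    ARMS = [1_000, 5_000, 10_000, 50_000]           # assumed iteration budgets (the "arms")

    # ε-greedy n-armed bandit over restart budgets.
    def restart_bandit(n=5, plays=100, epsilon=0.1):
        counts = [0] * len(ARMS)
        values = [0.0] * len(ARMS)                  # running mean reward per arm
        for _ in range(plays):
            if random.random() < epsilon:
                a = random.randrange(len(ARMS))     # explore
            else:
                a = max(range(len(ARMS)), key=lambda i: values[i])   # exploit
            _, best = hill_descent(n, iterations=ARMS[a])
            reward = -best                          # longer shortest path => higher reward
            counts[a] += 1
            values[a] += (reward - values[a]) / counts[a]
        return ARMS[max(range(len(ARMS)), key=lambda i: values[i])]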

• Many variations are possible to avoid plagiarism:
  ◦ Use different regular tilings, e.g. triangular or hexagonal.
  ◦ Topological constraints may be added (e.g. impassable walls/tiles) or removed (e.g. toroidal wrap-around).
  ◦ Movement constraints may be varied as well.
    - Add diagonal moves → Queen Jumping Maze
    - Abbott's "no-U-turn" rule increases state complexity

• The best puzzle assignments have a high fun to source-lines-of-code (SLOC) ratio:
  ◦ RJMs are fun, novel, interesting mazes with simple representation and rules.
  ◦ RJMs are particularly well suited to application of graph algorithms (evaluation) and stochastic local search (design).
• Everything you’d ever want to know about RJMs:
  ◦
• Questions?