Announcements
This Friday: Project 1 due. Talk by Jeniya Tabassum, "TweeTIME: A Minimally Supervised Method for Recognizing and Normalizing Time Expressions in Twitter".

Recap: Search
Search problem: states (configurations of the world), actions and costs, a successor function (world dynamics), a start state, and a goal test.
Search tree: nodes represent plans for reaching states; plans have costs (the sum of action costs).
Search algorithm: systematically builds a search tree and chooses an ordering of the fringe (unexplored nodes); optimal if it finds least-cost plans.
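A minimal sketch, in Python, of what such a search problem might look like as code; the class and method names are illustrative assumptions, not the course's project API, and the later sketches in these notes reuse this interface.

    class SearchProblem:
        def start_state(self):
            # The initial configuration of the world.
            raise NotImplementedError

        def is_goal(self, state):
            # Goal test: does this state satisfy the goal?
            raise NotImplementedError

        def successors(self, state):
            # World dynamics: yield (action, next_state, step_cost) triples.
            raise NotImplementedError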

Uniform Cost Search
Strategy: expand the node with the lowest path cost.
The good: UCS is complete and optimal!
The bad: it explores options in every “direction” and uses no information about the goal location.
(Figure: cost contours c ≤ 1, c ≤ 2, c ≤ 3 spreading out from Start toward Goal.)
[Demo: contours UCS empty (L3D1)] [Demo: contours UCS pacman small maze (L3D3)]
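A minimal sketch of uniform cost search in Python, using the assumed SearchProblem interface above; the frontier is a priority queue ordered by path cost g.

    import heapq

    def uniform_cost_search(problem):
        # Frontier entries are (g, tie_breaker, state, plan); the counter keeps
        # heapq from ever comparing states directly.
        frontier = [(0, 0, problem.start_state(), [])]
        best_g = {}   # cheapest cost at which each state has been expanded
        counter = 1
        while frontier:
            g, _, state, plan = heapq.heappop(frontier)
            if problem.is_goal(state):
                return plan                              # least-cost plan
            if state in best_g and best_g[state] <= g:
                continue                                 # already expanded more cheaply
            best_g[state] = g
            for action, next_state, step_cost in problem.successors(state):
                heapq.heappush(frontier,
                               (g + step_cost, counter, next_state, plan + [action]))
                counter += 1
        return None                                      # no solution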

Video of Demo Contours UCS Pacman Small Maze

Informed Search

Search Heuristics
A heuristic is a function that estimates how close a state is to a goal, designed for a particular search problem.
Examples: Manhattan distance, Euclidean distance for pathing.
(Figure: Pacman example with distance estimates 10, 5, and 11.2.)
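A minimal sketch of these two heuristics in Python; representing a state as an (x, y) position and passing the goal explicitly are assumptions made here for illustration.

    from math import hypot

    def manhattan_distance(state, goal):
        # Sum of axis-aligned offsets: a lower bound on moves in a 4-connected grid.
        (x1, y1), (x2, y2) = state, goal
        return abs(x1 - x2) + abs(y1 - y2)

    def euclidean_distance(state, goal):
        # Straight-line distance: a lower bound on any path length.
        (x1, y1), (x2, y2) = state, goal
        return hypot(x1 - x2, y1 - y2)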

Example: Heuristic Function h(x)

Example: Heuristic Function
Heuristic: the number of the largest pancake that is still out of place.
On this instance, A* expanded 8100 nodes (path cost 33), UCS expanded 25263 nodes (path cost 33), and Greedy expanded 10 nodes (path cost 41).
(Figure: pancake-stack states such as (7, 0, 5, 3, 2, 1, 4, 6) through the sorted goal (0, 1, 2, 3, 4, 5, 6, 7), annotated with h values.)
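A minimal sketch of this heuristic in Python, assuming a state is a tuple of pancake sizes where the goal places pancake i at index i; the encoding is an assumption for illustration, not taken from the slides.

    def largest_out_of_place(state):
        # The size of the largest pancake not yet in its goal position (0 if sorted).
        for pancake in range(len(state) - 1, -1, -1):
            if state[pancake] != pancake:
                return pancake
        return 0

Under this encoding, largest_out_of_place((7, 0, 5, 3, 2, 1, 4, 6)) returns 7, since pancake 7 is not yet in the last position.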

Greedy Search
Strategy: expand the node that you think is closest to a goal state.
Heuristic: an estimate of the distance to the nearest goal for each state.
A common case: best-first takes you straight to the (wrong) goal. Worst case: like a badly guided DFS.
For any search problem, you know the goal; now you also have an idea of how far away you are from it.
[Demo: contours greedy empty (L3D1)] [Demo: contours greedy pacman small maze (L3D4)]
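A minimal sketch of greedy best-first search in Python, reusing the assumed SearchProblem interface and a heuristic function h(state); the frontier is ordered by h alone, so path cost is ignored.

    import heapq

    def greedy_search(problem, h):
        start = problem.start_state()
        frontier = [(h(start), 0, start, [])]   # (h, tie_breaker, state, plan)
        visited = set()
        counter = 1
        while frontier:
            _, _, state, plan = heapq.heappop(frontier)
            if problem.is_goal(state):
                return plan                     # a plan, not necessarily the cheapest
            if state in visited:
                continue
            visited.add(state)
            for action, next_state, _ in problem.successors(state):
                heapq.heappush(frontier, (h(next_state), counter, next_state, plan + [action]))
                counter += 1
        return None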

Video of Demo Contours Greedy (Empty)

Video of Demo Contours Greedy (Pacman Small Maze)

A*: Combining UCS and Greedy
Uniform-cost orders by path cost, or backward cost g(n).
Greedy orders by goal proximity, or forward cost h(n).
A* Search orders by the sum: f(n) = g(n) + h(n).
(Figure: example graph from S to G with g and h values at each node. Example: Teg Grenager.)
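A minimal sketch of A* in Python on the same assumed interface; the only change from the UCS sketch is that the frontier is ordered by f = g + h instead of g alone.

    import heapq

    def a_star_search(problem, h):
        start = problem.start_state()
        frontier = [(h(start), 0, 0, start, [])]   # (f, tie_breaker, g, state, plan)
        best_g = {}
        counter = 1
        while frontier:
            _, _, g, state, plan = heapq.heappop(frontier)
            if problem.is_goal(state):
                return plan                        # plan for the first goal popped
            if state in best_g and best_g[state] <= g:
                continue
            best_g[state] = g
            for action, next_state, step_cost in problem.successors(state):
                new_g = g + step_cost
                heapq.heappush(frontier,
                               (new_g + h(next_state), counter, new_g, next_state, plan + [action]))
                counter += 1
        return None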

Admissible Heuristics
A heuristic h is admissible (optimistic) if 0 ≤ h(n) ≤ h*(n) for every node n, where h*(n) is the true cost to a nearest goal.
Coming up with admissible heuristics is most of what's involved in using A* in practice.
(Figure: example with heuristic estimates 15 and 4.)

Optimality of A* Tree Search: Blocking
Setup: A is an optimal goal node, B is a suboptimal goal node, and h is admissible.
Proof sketch: Imagine B is on the fringe. Some ancestor n of A is on the fringe too (maybe A itself). Claim: n will be expanded before B, because f(n) ≤ f(A) (h never overestimates the remaining cost along A's path) and f(A) < f(B) (B is suboptimal and h is zero at goals), so f(n) < f(B).
Hence all ancestors of A expand before B, so A expands before B, and A* tree search is optimal.
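The chain of inequalities behind the claim, written out as a restatement of the argument above; here h*(n) denotes the true remaining cost from n, and the goal values use the fact that an admissible h is zero at goal nodes:

\[
f(n) \;=\; g(n) + h(n) \;\le\; g(n) + h^*(n) \;\le\; g(A) \;=\; f(A) \;<\; g(B) \;=\; f(B).
\]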

Properties of A*
(Figure: side-by-side exploration fan-outs for Uniform-Cost and A*.)

UCS vs A* Contours
Uniform-cost expands equally in all “directions”; A* expands mainly toward the goal, but does hedge its bets to ensure optimality.
(Figure: expansion contours from Start to Goal for each strategy.)
[Demo: contours UCS / greedy / A* empty (L3D1)] [Demo: contours A* pacman small maze (L3D5)]

Video of Demo Contours (Empty) -- UCS

Video of Demo Contours (Empty) -- Greedy

Video of Demo Contours (Empty) – A*

Video of Demo Contours (Pacman Small Maze) – A*

Comparison Greedy Uniform Cost A*

A* Applications
Video games; pathing / routing problems; resource planning problems; robot motion planning; language analysis; machine translation; speech recognition; …
[Demo: UCS / A* pacman tiny maze (L3D6,L3D7)] [Demo: guess algorithm Empty Shallow/Deep (L3D8)]

Video of Demo Pacman (Tiny Maze) – UCS / A*

Creating Heuristics

Creating Admissible Heuristics
Most of the work in solving hard search problems optimally is in coming up with admissible heuristics.
Often, admissible heuristics are solutions to relaxed problems, where new actions are available.
Inadmissible heuristics are often useful too.
(Figure: example heuristic values 15 and 366.)

Example: 8 Puzzle
(Figure: start state, actions, and goal state.)
What are the states? How many states? What are the actions? How many successors from the start state? What should the costs be?

8 Puzzle I
Heuristic: number of tiles misplaced. Why is it admissible? h(start) = 8. This is a relaxed-problem heuristic.
Average nodes expanded when the optimal path has…
         …4 steps   …8 steps   …12 steps
UCS         112      6,300     3.6 x 10^6
TILES        13         39        227
Statistics from Andrew Moore.
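A minimal sketch of the misplaced-tiles heuristic in Python, assuming states are 9-tuples listing the board left to right, top to bottom, with 0 for the blank; the encoding is an assumption for illustration.

    def misplaced_tiles(state, goal):
        # Count tiles (ignoring the blank) that are not where the goal puts them.
        return sum(1 for tile, target in zip(state, goal)
                   if tile != 0 and tile != target)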

8 Puzzle II
What if we had an easier 8-puzzle where any tile could slide in any direction at any time, ignoring other tiles?
Heuristic: total Manhattan distance. Why is it admissible? h(start) = 3 + 1 + 2 + … = 18.
Average nodes expanded when the optimal path has…
              …4 steps   …8 steps   …12 steps
TILES            13         39        227
MANHATTAN        12         25         73
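Under the same assumed 9-tuple encoding, a minimal sketch of the total Manhattan distance heuristic:

    def manhattan_8puzzle(state, goal):
        # Sum, over tiles, of |row offset| + |column offset| to the tile's goal cell.
        total = 0
        for tile in range(1, 9):                       # skip the blank (0)
            i, j = state.index(tile), goal.index(tile)
            total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
        return total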

8 Puzzle III
How about using the actual cost as a heuristic? Would it be admissible? Would we save on nodes expanded? What's wrong with it?
With A* there is a trade-off between quality of estimate and work per node: as heuristics get closer to the true cost, you expand fewer nodes but usually do more work per node to compute the heuristic itself.

Consistency of Heuristics
Main idea: estimated heuristic costs ≤ actual costs.
Admissibility: heuristic cost ≤ actual cost to goal, i.e. h(A) ≤ actual cost from A to G.
Consistency: heuristic “arc” cost ≤ actual cost for each arc, i.e. h(A) – h(C) ≤ cost(A to C), equivalently h(A) ≤ cost(A to C) + h(C).
Consequences of consistency: the f value along a path never decreases, and A* graph search is optimal.
(Figure: triangle A, C, G with arc costs 1 and 3 and h values 4, 1, 2.)
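A minimal sketch, in Python, of checking the consistency condition over an explicit graph; representing the graph as a dict from state to (successor, arc_cost) pairs and h as a dict of estimates are assumptions made here for illustration.

    def is_consistent(h, graph):
        # h is consistent if h[a] <= cost(a, c) + h[c] for every arc (a, c).
        return all(h[a] <= cost + h[c]
                   for a, arcs in graph.items()
                   for c, cost in arcs)

For example, with h = {'A': 4, 'C': 1, 'G': 0} and graph = {'A': [('C', 1)], 'C': [('G', 3)], 'G': []}, the arc A → C violates the condition (4 > 1 + 1), so the check returns False; raising h['C'] to 3 (or lowering h['A']) restores consistency.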

Optimality of A* Graph Search
Sketch: consider what A* does with a consistent heuristic.
Fact 1: In tree search, A* expands nodes in increasing total f value (f-contours). (Figure: contours f ≤ 1, f ≤ 2, f ≤ 3.)
Fact 2: For every state s, nodes that reach s optimally are expanded before nodes that reach s suboptimally.
Result: A* graph search is optimal.

Optimality
Tree search: A* is optimal if the heuristic is admissible; UCS is a special case (h = 0).
Graph search: A* is optimal if the heuristic is consistent; UCS is optimal (h = 0 is consistent); consistency implies admissibility.
In general, most natural admissible heuristics tend to be consistent, especially if they come from relaxed problems.