Chapter 3 Solving problems by searching

Search
- We will consider the problem of designing goal-based agents in fully observable, deterministic, discrete, known environments
- The solution to such a problem is a fixed sequence of actions
- Search is the process of looking for the sequence of actions that reaches the goal
- Once the agent begins executing the search solution, it can ignore its percepts (an open-loop system)

Search problem components
- Initial state
- Actions
- Transition model: what is the result of performing a given action in a given state?
- Goal state
- Path cost: assumed to be a sum of nonnegative step costs
The optimal solution is the sequence of actions that gives the lowest path cost for reaching the goal.
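
A minimal sketch of how these five components might be expressed as a problem interface in code; the class and method names below are illustrative, not part of the slides.

    class SearchProblem:
        """Hypothetical interface bundling the components of a search problem."""

        def initial_state(self):
            raise NotImplementedError

        def actions(self, state):
            # Actions applicable in the given state.
            raise NotImplementedError

        def result(self, state, action):
            # Transition model: the state reached by doing `action` in `state`.
            raise NotImplementedError

        def is_goal(self, state):
            # Goal test.
            raise NotImplementedError

        def step_cost(self, state, action, next_state):
            # Nonnegative step cost; the path cost is the sum of step costs.
            return 1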

Example: Romania
On vacation in Romania, currently in Arad; the flight leaves tomorrow from Bucharest.
- Initial state: Arad
- Actions: go from one city to another (along a road)
- Transition model: if you go from city A to city B, you end up in city B
- Goal state: Bucharest
- Path cost: sum of edge costs
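
As an illustration, this problem could be encoded against the SearchProblem sketch above using an adjacency map of road distances; only part of the map is shown, with distances taken from the standard Romania figure in the textbook.

    # Partial road map (distances in km); the full textbook map has 20 cities.
    ROADS = {
        'Arad': {'Zerind': 75, 'Sibiu': 140, 'Timisoara': 118},
        'Sibiu': {'Arad': 140, 'Oradea': 151, 'Fagaras': 99, 'Rimnicu Vilcea': 80},
        'Fagaras': {'Sibiu': 99, 'Bucharest': 211},
        'Rimnicu Vilcea': {'Sibiu': 80, 'Pitesti': 97, 'Craiova': 146},
        'Pitesti': {'Rimnicu Vilcea': 97, 'Craiova': 138, 'Bucharest': 101},
    }

    class RomaniaProblem(SearchProblem):
        def initial_state(self):
            return 'Arad'

        def actions(self, state):
            return list(ROADS.get(state, {}))       # drive to any neighboring city

        def result(self, state, action):
            return action                           # "go to B" leaves you in city B

        def is_goal(self, state):
            return state == 'Bucharest'

        def step_cost(self, state, action, next_state):
            return ROADS[state][next_state]         # road (edge) distance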

State space
The initial state, actions, and transition model define the state space of the problem:
- the set of all states reachable from the initial state by any sequence of actions
- can be represented as a directed graph where the nodes are states and the links between nodes are actions
What is the state space for the Romania problem?

Example: Vacuum world
- States: agent location and dirt locations
  - How many possible states? With two locations: 2 * 2^2 = 8
  - What if there are n possible locations? n * 2^n states
- Actions: Left, Right, Suck
- Transition model: see the sketch below
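
A minimal transition-model sketch for the two-location world, assuming a state is a pair (agent location, set of dirty locations); this representation is illustrative, not prescribed by the slide.

    # State: (agent_location, frozenset of dirty locations); locations are 0 (left) and 1 (right).
    def vacuum_result(state, action):
        loc, dirt = state
        if action == 'Suck':
            return (loc, dirt - {loc})          # cleaning removes dirt under the agent
        if action == 'Left':
            return (max(loc - 1, 0), dirt)      # moving off the edge has no effect
        if action == 'Right':
            return (min(loc + 1, 1), dirt)
        raise ValueError(f'unknown action {action}')

    start = (0, frozenset({0, 1}))              # agent on the left, both squares dirty
    assert vacuum_result(start, 'Suck') == (0, frozenset({1}))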

Vacuum world state space graph (figure: the eight states of the two-location world connected by Left, Right, and Suck transitions)

Example: The 8-puzzle
- States: locations of the tiles
  - 8-puzzle: 181,440 states
  - 15-puzzle: ~1.3 trillion states
  - 24-puzzle: ~10^25 states
- Actions: move the blank left, right, up, or down
- Path cost: 1 per move
Finding an optimal solution of the n-puzzle is NP-hard.

Example: Robot motion planning
- States: real-valued coordinates of the robot's joint angles
- Actions: continuous motions of the robot joints
- Goal state: desired final configuration (e.g., object is grasped)
- Path cost: time to execute, smoothness of the path, etc.

Other real-world examples
- Routing
- Touring
- VLSI layout
- Assembly sequencing
- Protein design

Tree search
- Begin at the start node and expand it by making a list of all possible successor states
- Maintain a fringe: a list of unexpanded states
- At each step, pick a state from the fringe to expand
- Keep going until you reach the goal state
- Try to expand as few states as possible

Search tree
- A "what if" tree of possible actions and outcomes
- The root node corresponds to the starting state
- The children of a node correspond to the successor states of that node's state
- A path through the tree corresponds to a sequence of actions; a solution is a path ending in the goal state
(figure: schematic tree with the starting state at the root, actions as edges, successor states as children, and a goal state at a leaf)
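
A search-tree node usually carries more than the raw state: it records how it was reached, so the solution path can be read off by following parent pointers. A minimal sketch (the names are illustrative):

    from dataclasses import dataclass
    from typing import Any, Optional

    @dataclass
    class Node:
        state: Any                       # the state this node corresponds to
        parent: Optional['Node'] = None  # node whose expansion generated this one
        action: Any = None               # action taken to get here from the parent
        path_cost: float = 0.0           # cost of the path from the root to this node

    def solution(node):
        # Follow parent pointers back to the root, collecting the actions.
        actions = []
        while node.parent is not None:
            actions.append(node.action)
            node = node.parent
        return list(reversed(actions))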

Tree search algorithm outline
- Initialize the fringe using the starting state
- While the fringe is not empty:
  - Choose a fringe node to expand according to the search strategy
  - If the node contains the goal state, return the solution
  - Else expand the node and add its children to the fringe
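
A direct translation of this outline, assuming the SearchProblem interface and the Node/solution helpers sketched above; the fringe container and the pop function are chosen by the strategy (FIFO queue, stack, or priority queue).

    def tree_search(problem, fringe, pop):
        # `fringe` is a container of Nodes; `pop` removes and returns the next node
        # to expand according to the search strategy.
        fringe.append(Node(problem.initial_state()))
        while fringe:
            node = pop(fringe)
            if problem.is_goal(node.state):
                return solution(node)                 # solution found
            for action in problem.actions(node.state):
                child = problem.result(node.state, action)
                cost = node.path_cost + problem.step_cost(node.state, action, child)
                fringe.append(Node(child, node, action, cost))
        return None                                   # fringe exhausted: no solution

    # Example usage: breadth-first behavior with a FIFO queue.
    # from collections import deque
    # tree_search(problem, deque(), lambda f: f.popleft())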

Tree search example (figure slides: a search tree expanded step by step, with the fringe of unexpanded nodes highlighted at each step)

Search strategies
A search strategy is defined by picking the order of node expansion. Strategies are evaluated along the following dimensions:
- Completeness: does it always find a solution if one exists?
- Optimality: does it always find a least-cost solution?
- Time complexity: number of nodes generated
- Space complexity: maximum number of nodes in memory
Time and space complexity are measured in terms of:
- b: maximum branching factor of the search tree
- d: depth of the least-cost solution
- m: maximum length of any path in the state space (may be infinite)

Uninformed search strategies
Uninformed search strategies use only the information available in the problem definition:
- Breadth-first search
- Uniform-cost search
- Depth-first search
- Iterative deepening search

Breadth-first search
- Expand the shallowest unexpanded node
- Implementation: the fringe is a FIFO queue, i.e., new successors go at the end
(figure: step-by-step expansion of a small example tree with nodes A through G)
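
A BFS sketch on the interfaces above, using collections.deque as the FIFO fringe; like the tree-search outline, it does not filter repeated states.

    from collections import deque

    def breadth_first_search(problem):
        fringe = deque([Node(problem.initial_state())])        # FIFO queue
        while fringe:
            node = fringe.popleft()                             # shallowest node first
            if problem.is_goal(node.state):
                return solution(node)
            for action in problem.actions(node.state):
                child = problem.result(node.state, action)
                cost = node.path_cost + problem.step_cost(node.state, action, child)
                fringe.append(Node(child, node, action, cost))  # new successors go at the end
        return None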

Properties of breadth-first search
- Complete? Yes (if the branching factor b is finite)
- Optimal? Yes, if the cost is 1 per step
- Time? Number of nodes in a b-ary tree of depth d: O(b^d), where d is the depth of the optimal solution
- Space? O(b^d), since every generated node is kept in memory
Space is the bigger problem (more than time).

Uniform-cost search
- Expand the least-cost unexpanded node
- Implementation: the fringe is a queue ordered by path cost (a priority queue)
- Equivalent to breadth-first search if all step costs are equal
- Complete? Yes, if every step cost is greater than some positive constant ε
- Optimal? Yes: nodes are expanded in increasing order of path cost
- Time? Number of nodes with path cost ≤ the cost of the optimal solution C*, i.e., O(b^(C*/ε)). This can be greater than O(b^d): the search can explore long paths consisting of small steps before exploring shorter paths consisting of larger steps
- Space? O(b^(C*/ε))
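
A uniform-cost sketch using Python's heapq as the priority queue ordered by path cost g(n); an integer counter breaks ties so Node objects never need to be compared. The cheapest-cost-so-far table is a small refinement beyond the plain tree-search version described on the slide.

    import heapq
    from itertools import count

    def uniform_cost_search(problem):
        tie = count()                                     # tie-breaker for equal costs
        start = problem.initial_state()
        fringe = [(0.0, next(tie), Node(start))]          # priority = path cost g(n)
        best_g = {start: 0.0}                             # cheapest cost found so far per state
        while fringe:
            g, _, node = heapq.heappop(fringe)            # least-cost node first
            if problem.is_goal(node.state):
                return solution(node)
            for action in problem.actions(node.state):
                child = problem.result(node.state, action)
                cost = g + problem.step_cost(node.state, action, child)
                if cost < best_g.get(child, float('inf')):
                    best_g[child] = cost
                    heapq.heappush(fringe, (cost, next(tie), Node(child, node, action, cost)))
        return None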

Depth-first search
- Expand the deepest unexpanded node
- Implementation: the fringe is a LIFO queue (a stack), i.e., new successors go at the front
(figure: step-by-step expansion of a small example tree with nodes A through G)
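
A DFS sketch: using a plain Python list as the fringe, append and pop both work at the end, giving LIFO behavior so the most recently generated (deepest) node is expanded next.

    def depth_first_search(problem):
        fringe = [Node(problem.initial_state())]          # list used as a stack (LIFO)
        while fringe:
            node = fringe.pop()                           # deepest (most recent) node first
            if problem.is_goal(node.state):
                return solution(node)
            for action in problem.actions(node.state):
                child = problem.result(node.state, action)
                cost = node.path_cost + problem.step_cost(node.state, action, child)
                fringe.append(Node(child, node, action, cost))
        return None
    # Without repeated-state checking this can loop forever in state spaces with cycles,
    # which is the completeness caveat on the properties slide below.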

Properties of depth-first search
- Complete? No: fails in infinite-depth spaces and in spaces with loops. Modified to avoid repeated states along the current path, it is complete in finite spaces
- Optimal? No: it returns the first solution it finds
- Time? Could be the time to reach a solution at maximum depth m: O(b^m). Terrible if m is much larger than d, but if there are lots of solutions it may be much faster than BFS
- Space? O(bm), i.e., linear space!

Iterative deepening search
Use DFS as a subroutine (a sketch follows this list):
1. Check the root
2. Do a DFS searching for a path of length 1
3. If there is no path of length 1, do a DFS searching for a path of length 2
4. If there is no path of length 2, do a DFS searching for a path of length 3
   ... and so on, increasing the depth limit until a solution is found
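
An iterative-deepening sketch built on a recursive depth-limited DFS; shallow levels are re-explored on each iteration, but as the properties slide below notes, the total work remains O(b^d). The max_depth cap is an illustrative safeguard, not part of the slide.

    def depth_limited_search(problem, node, limit):
        if problem.is_goal(node.state):
            return solution(node)
        if limit == 0:
            return None                                   # depth cutoff reached
        for action in problem.actions(node.state):
            child = problem.result(node.state, action)
            cost = node.path_cost + problem.step_cost(node.state, action, child)
            found = depth_limited_search(problem, Node(child, node, action, cost), limit - 1)
            if found is not None:
                return found
        return None

    def iterative_deepening_search(problem, max_depth=50):
        for limit in range(max_depth + 1):                # try depth limits 0, 1, 2, ...
            found = depth_limited_search(problem, Node(problem.initial_state()), limit)
            if found is not None:
                return found
        return None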

Iterative deepening search (figure slides: successive depth-limited searches of a small example tree at increasing depth limits)

Properties of iterative deepening search
- Complete? Yes
- Optimal? Yes, if step cost = 1
- Time? (d+1)b^0 + d*b^1 + (d-1)*b^2 + ... + b^d = O(b^d)
- Space? O(bd)

Informed search
- Idea: give the algorithm "hints" about the desirability of different states
- Use an evaluation function to rank nodes and select the most promising one for expansion
- Examples: greedy best-first search, A* search

Heuristic function
The heuristic function h(n) estimates the cost of reaching the goal from node n.
(figure: an example start state and goal state)

Heuristic for the Romania problem (figure: straight-line distances from each city to Bucharest, used as h(n))

Greedy best-first search
Expand the node that has the lowest value of the heuristic function h(n).
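
A greedy best-first sketch: the same shape as uniform-cost search, but the priority queue is ordered by h(n) alone. The heuristic is passed in as a function, e.g., straight-line distance to Bucharest for the Romania problem.

    import heapq
    from itertools import count

    def greedy_best_first_search(problem, h):
        tie = count()
        start = Node(problem.initial_state())
        fringe = [(h(start.state), next(tie), start)]     # priority = h(n) only
        while fringe:
            _, _, node = heapq.heappop(fringe)            # most promising node first
            if problem.is_goal(node.state):
                return solution(node)
            for action in problem.actions(node.state):
                child = problem.result(node.state, action)
                cost = node.path_cost + problem.step_cost(node.state, action, child)
                heapq.heappush(fringe, (h(child), next(tie), Node(child, node, action, cost)))
        return None
    # Without repeated-state checking this can oscillate between two states forever,
    # which is the incompleteness illustrated on the properties slides below.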

Greedy best-first search example (figure slides: step-by-step expansion on the Romania map, always choosing the city with the smallest straight-line distance to Bucharest)

Properties of greedy best-first search
- Complete? No: can get stuck in loops (figure: a small example where greedy search oscillates between two states and never reaches the goal)
- Optimal? No
- Time? Worst case: O(b^m); best case: O(bd), if h(n) is 100% accurate
- Space? Worst case: O(b^m)

A* search
- Idea: avoid expanding paths that are already expensive
- The evaluation function f(n) is the estimated total cost of the path through node n to the goal: f(n) = g(n) + h(n)
  - g(n): cost so far to reach n (path cost)
  - h(n): estimated cost from n to the goal (heuristic)
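
An A* sketch: the same best-first loop, with the priority queue ordered by f(n) = g(n) + h(n). The cheapest-cost-so-far table mirrors the uniform-cost sketch and is an optional refinement beyond the plain tree-search version on the slides.

    import heapq
    from itertools import count

    def a_star_search(problem, h):
        tie = count()
        start = Node(problem.initial_state())
        fringe = [(h(start.state), next(tie), start)]     # priority = f(n) = g(n) + h(n)
        best_g = {start.state: 0.0}
        while fringe:
            _, _, node = heapq.heappop(fringe)            # lowest f(n) first
            if problem.is_goal(node.state):
                return solution(node)
            for action in problem.actions(node.state):
                child = problem.result(node.state, action)
                g = node.path_cost + problem.step_cost(node.state, action, child)
                if g < best_g.get(child, float('inf')):
                    best_g[child] = g
                    heapq.heappush(fringe, (g + h(child), next(tie), Node(child, node, action, g)))
        return None

With an admissible heuristic (see the slides below), the first goal node popped from the fringe corresponds to an optimal solution.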

A* search example (figure slides: step-by-step expansion on the Romania map, showing g(n), h(n), and f(n) for the nodes on the fringe)

Properties of A*
- Complete? Yes, unless there are infinitely many nodes with f(n) ≤ C*
- Optimal? Yes
- Time? Number of nodes for which f(n) ≤ C* (exponential)
- Space? Exponential

Admissible heuristics
- A heuristic h(n) is admissible if for every node n, h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal state from n
- An admissible heuristic never overestimates the cost to reach the goal, i.e., it is optimistic
- Example: straight-line distance never overestimates the actual road distance
- Theorem: if h(n) is admissible, then A* is optimal

Designing heuristic functions
Heuristics for the 8-puzzle:
- h1(n) = number of misplaced tiles
- h2(n) = total Manhattan distance (number of squares from the desired location of each tile)
- For the example start state: h1(start) = 8, h2(start) = 18
Are h1 and h2 admissible?
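
A sketch of both heuristics for a 3x3 puzzle, with a state represented as a tuple of nine entries in row-major order and 0 for the blank; the goal layout below is one common convention and is only illustrative. Neither heuristic counts the blank.

    GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)    # illustrative goal layout; 0 is the blank

    def h1_misplaced(state, goal=GOAL):
        # Number of non-blank tiles that are not in their goal position.
        return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

    def h2_manhattan(state, goal=GOAL):
        # Sum over tiles of |row difference| + |column difference| to the goal position.
        total = 0
        for index, tile in enumerate(state):
            if tile == 0:
                continue
            goal_index = goal.index(tile)
            total += abs(index // 3 - goal_index // 3) + abs(index % 3 - goal_index % 3)
        return total
    # Both are admissible: every move relocates one tile by one square, so it can reduce
    # h2 by at most 1, and h1 <= h2 <= the true remaining cost.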

Comparison of search strategies

Algorithm | Complete? | Optimal?                    | Time complexity                      | Space complexity
BFS       | Yes       | If all step costs are equal | O(b^d)                               | O(b^d)
UCS       | Yes       | Yes                         | Number of nodes with g(n) ≤ C*       | Number of nodes with g(n) ≤ C*
DFS       | No        | No                          | O(b^m)                               | O(bm)
IDS       | Yes       | If all step costs are equal | O(b^d)                               | O(bd)
Greedy    | No        | No                          | Worst case: O(b^m); best case: O(bd) | Worst case: O(b^m)
A*        | Yes       | Yes                         | Number of nodes with g(n)+h(n) ≤ C*  | Number of nodes with g(n)+h(n) ≤ C*