Uninformed Search Chapter 3.4.

Uninformed search strategies
Uninformed: you have no clue whether one non-goal state is better than any other. Your search is blind. You don't know whether your current exploration is likely to be fruitful.
Various blind strategies:
- Breadth-first search
- Uniform-cost search
- Depth-first search
- Iterative deepening search (generally preferred)
- Bidirectional search (preferred if applicable)

Uninformed search strategies Queue for Frontier: FIFO? LIFO? Priority? Goal-Test: When inserted into Frontier? When removed? Tree Search or Graph Search: Forget Explored nodes? Remember them?

Breadth-first search Expand shallowest unexpanded node Frontier (or fringe): nodes in queue to be explored Frontier is a first-in-first-out (FIFO) queue, i.e., new successors go at end of the queue. Goal-Test when inserted. Is A a goal state?

Breadth-first search Expand shallowest unexpanded node Frontier is a FIFO queue, i.e., new successors go at end Expand A to B, C: frontier = [B,C] Is B or C a goal state?

Breadth-first search Expand shallowest unexpanded node Frontier is a FIFO queue, i.e., new successors go at end Expand B to D, E: frontier=[C,D,E] Is D or E a goal state?

Breadth-first search Expand shallowest unexpanded node Frontier is a FIFO queue, i.e., new successors go at end Expand C to F, G: fringe=[D,E,F,G] Is F or G a goal state?
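
A minimal Python sketch of this breadth-first procedure, matching the trace above: the frontier is a FIFO queue and the goal test is applied when a node is generated (inserted). The names (`successors`, `start`, `is_goal`, `tree`) are illustrative, not from the slides.

```python
from collections import deque

def breadth_first_search(successors, start, is_goal):
    """Tree-style BFS sketch: FIFO frontier, goal test applied at generation time."""
    if is_goal(start):
        return [start]
    frontier = deque([[start]])              # queue of paths, oldest first
    while frontier:
        path = frontier.popleft()            # expand the shallowest unexpanded node
        for child in successors(path[-1]):
            if is_goal(child):               # goal test when the child is inserted
                return path + [child]
            frontier.append(path + [child])
    return None                              # no solution found

# The small tree from the trace above: A -> B, C; B -> D, E; C -> F, G
tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G']}
print(breadth_first_search(lambda s: tree.get(s, []), 'A', lambda s: s == 'G'))
# ['A', 'C', 'G']
```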

Example BFS

Properties of breadth-first search
Complete? Yes, it always reaches the goal (if b is finite).
Time? 1 + b + b^2 + b^3 + … + b^d + (b^(d+1) - b) = O(b^(d+1)) (this is the number of nodes we generate).
Space? O(b^(d+1)) (keeps every node in memory, either in the frontier or on a path to the frontier).
Optimal? Yes, if we guarantee that deeper solutions are less optimal, e.g., step cost = 1.
Space is the bigger problem (more so than time).

Uniform-cost search
Breadth-first search is only optimal if the path cost is a non-decreasing function of depth, i.e., f(d) ≥ f(d-1); e.g., constant step cost, as in the 8-puzzle. Can we guarantee optimality for any positive step cost?
Uniform-cost search: expand the node with the smallest path cost g(n).
The frontier is a priority queue, i.e., new successors are merged into the queue sorted by g(n); successor states already on the queue with higher g(n) are replaced.
Goal test when the node is popped off the queue.

Uniform-cost search
Uniform-cost search expands the node with the smallest path cost g(n).
Proof of completeness: given that every step costs at least ε > 0 and the branching factor is finite, only a finite number of expansions is required before the total path cost reaches the path cost of the goal state; hence we will reach it.
Proof of optimality, given completeness: assume UCS is not optimal. Then there must be a goal state with path cost smaller than that of the (suboptimal) goal state it found (invoking completeness). But this is impossible, because UCS would have expanded that cheaper node first, by definition. Contradiction.

Uniform-cost search
Implementation: frontier = queue ordered by path cost g(n).
Equivalent to breadth-first search if all step costs are equal.
Complete? Yes, if every step cost is ≥ ε > 0 (otherwise it can get stuck expanding endless sequences of zero-cost steps).
Time? Number of nodes with path cost ≤ C*, the cost of the optimal solution: O(b^(1 + ⌊C*/ε⌋)).
Space? Number of nodes with path cost ≤ C*: O(b^(1 + ⌊C*/ε⌋)).
Optimal? Yes, for any step cost ≥ ε > 0.
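
A sketch of uniform-cost search under the same assumptions, with illustrative names; `successors(state)` is assumed to yield (child, step cost) pairs. The frontier is a priority queue keyed by g(n), the goal test is applied when a node is popped, as described above, and entries made stale by a cheaper path are simply skipped rather than removed from the queue.

```python
import heapq

def uniform_cost_search(successors, start, is_goal):
    """UCS sketch: priority queue ordered by path cost g(n); goal test when popped."""
    frontier = [(0, start, [start])]                  # entries are (g, state, path)
    best_g = {start: 0}                               # cheapest known cost per state
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if is_goal(state):                            # goal test when popped off the queue
            return g, path
        if g > best_g.get(state, float('inf')):
            continue                                  # stale entry: a cheaper path was found
        for child, cost in successors(state):
            new_g = g + cost
            if new_g < best_g.get(child, float('inf')):
                best_g[child] = new_g
                heapq.heappush(frontier, (new_g, child, path + [child]))
    return None

# Tiny illustrative graph (not the exercise graph): edges listed as (child, step cost)
graph = {'S': [('A', 1), ('B', 4)], 'A': [('G', 5)], 'B': [('G', 1)]}
print(uniform_cost_search(lambda s: graph.get(s, []), 'S', lambda s: s == 'G'))
# (5, ['S', 'B', 'G'])
```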

Exercise for at home
[Graph figure omitted: a weighted graph with nodes S, A, B, C, D, E, F, G and step costs on the edges.]
The graph shows the step costs for different paths going from the start (S) to the goal (G). Use uniform-cost search to find the optimal path to the goal.

Depth-first search Expand deepest unexpanded node Frontier = Last In First Out (LIFO) queue, i.e., new successors go at the front of the queue. Goal-Test when inserted. Is A a goal state?

Depth-first search Expand deepest unexpanded node Frontier = LIFO queue, i.e., put successors at front queue=[B,C] Is B or C a goal state?

Depth-first search Expand deepest unexpanded node Frontier = LIFO queue, i.e., put successors at front queue=[D,E,C] Is D or E a goal state?

Depth-first search Expand deepest unexpanded node Frontier = LIFO queue, i.e., put successors at front queue=[H,I,E,C] Is H or I a goal state?

Depth-first search Expand deepest unexpanded node Frontier = LIFO queue, i.e., put successors at front queue=[I,E,C]

Depth-first search Expand deepest unexpanded node Frontier = LIFO queue, i.e., put successors at front queue=[E,C]

Depth-first search Expand deepest unexpanded node Frontier = LIFO queue, i.e., put successors at front queue=[J,K,C] Is J or K a goal state?

Depth-first search Expand deepest unexpanded node Frontier = LIFO queue, i.e., put successors at front queue=[K,C]

Depth-first search Expand deepest unexpanded node Frontier = LIFO queue, i.e., put successors at front queue=[C]

Depth-first search Expand deepest unexpanded node Frontier = LIFO queue, i.e., put successors at front queue=[F,G] Is F or G a goal state?

Depth-first search Expand deepest unexpanded node Frontier = LIFO queue, i.e., put successors at front queue=[L,M,G] Is L or M a goal state?

Depth-first search Expand deepest unexpanded node Frontier = LIFO queue, i.e., put successors at front queue=[M,G]

Properties of depth-first search
Complete? No: it fails in infinite-depth spaces. Can be modified to avoid repeated states along the current path.
Time? O(b^m), with m = maximum depth; terrible if m is much larger than d, but if solutions are dense it may be much faster than breadth-first search.
Space? O(bm), i.e., linear space! (We only need to remember a single path plus the expanded but unexplored siblings along it.)
Optimal? No (it may find a non-optimal goal first).
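
A sketch of the corresponding depth-first tree search with a LIFO frontier (a plain Python list used as a stack) and an optional depth limit, which previews the depth-limited search on the next slide. Names are illustrative; for brevity it stores whole paths on the stack and applies the goal test when a node is popped, which is slightly different bookkeeping from the trace and the O(bm) single-path analysis above.

```python
def depth_first_search(successors, start, is_goal, limit=None):
    """Tree-style DFS sketch: LIFO frontier, new successors go on top of the stack.
    If `limit` is given, this becomes depth-limited search."""
    frontier = [[start]]                          # stack of paths
    while frontier:
        path = frontier.pop()                     # expand the deepest unexpanded node
        node = path[-1]
        if is_goal(node):
            return path
        if limit is not None and len(path) - 1 >= limit:
            continue                              # cutoff: do not expand beyond the limit
        # Push children in reverse so the leftmost child is expanded first.
        for child in reversed(successors(node)):
            frontier.append(path + [child])
    return None
```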

Iterative deepening search
To avoid the infinite-depth problem of DFS, we can decide to search only up to depth L, i.e., we don't expand nodes beyond depth L → Depth-Limited Search.
What if the solution is deeper than L? Increase L iteratively → Iterative Deepening Search.
As we shall see, this inherits the memory advantage of depth-first search and is better in terms of time complexity than breadth-first search (under the O(b^(d+1)) analysis above).
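
A recursive sketch of depth-limited search and the iterative-deepening wrapper around it, with illustrative names. For simplicity this version does not distinguish "cutoff" from "failure", which a full implementation would do in order to stop early on finite spaces and stay complete on infinite ones.

```python
def depth_limited_search(successors, node, is_goal, limit):
    """Depth-limited DFS sketch: returns a path to a goal, or None if none is found."""
    if is_goal(node):
        return [node]
    if limit == 0:
        return None                               # depth limit L reached
    for child in successors(node):
        result = depth_limited_search(successors, child, is_goal, limit - 1)
        if result is not None:
            return [node] + result
    return None

def iterative_deepening_search(successors, start, is_goal, max_depth=50):
    """Run depth-limited search with L = 0, 1, 2, ... until a solution appears."""
    for limit in range(max_depth + 1):
        result = depth_limited_search(successors, start, is_goal, limit)
        if result is not None:
            return result
    return None
```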

Iterative deepening search L=0

Iterative deepening search L=1

Iterative deepening search L=2

Iterative Deepening Search L=3

Iterative deepening search
Number of nodes generated in a depth-limited search to depth d with branching factor b:
N_DLS = b^0 + b^1 + b^2 + … + b^(d-2) + b^(d-1) + b^d
Number of nodes generated in an iterative deepening search to depth d with branching factor b:
N_IDS = (d+1)b^0 + d·b^1 + (d-1)b^2 + … + 3b^(d-2) + 2b^(d-1) + 1·b^d = O(b^d)
For b = 10, d = 5:
N_DLS = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111
N_IDS = 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456
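
The two node counts above can be checked with a couple of lines of Python:

```python
b, d = 10, 5
n_dls = sum(b**i for i in range(d + 1))                 # 1 + b + b^2 + ... + b^d
n_ids = sum((d + 1 - i) * b**i for i in range(d + 1))   # (d+1)*b^0 + d*b^1 + ... + 1*b^d
print(n_dls, n_ids)                                     # 111111 123456
```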

Properties of iterative deepening search
Complete? Yes.
Time? O(b^d).
Space? O(bd), i.e., linear.
Optimal? Yes, if the step cost = 1 or is an increasing function of depth.

Bidirectional search
Idea: simultaneously search forward from S and backwards from G; stop when the two searches "meet in the middle". We need to keep track of the intersection of the two open sets of nodes.
What does searching backwards from G mean?
We need a way to specify the predecessors of G; this can be difficult, e.g., what are the predecessors of checkmate in chess?
Which goal do we search back from if there are multiple goal states?
Where do we start if there is only a goal test and no explicit list of goals?

Bidirectional search
Complexity: time and space complexity are both O(b^(d/2)), since each search only needs to reach half the solution depth d.
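
A minimal sketch of bidirectional breadth-first search on an explicit undirected graph (an adjacency dict; names are illustrative). It expands one node from each direction in turn and stops as soon as the two searches share a state, which is what gives the O(b^(d/2)) behaviour; predecessors are available here simply because the graph is undirected. Stopping at the first meeting state keeps the sketch short; guaranteeing the very shortest path would require finishing the current layer on both sides before stopping.

```python
from collections import deque

def bidirectional_search(neighbors, start, goal):
    """Bidirectional BFS sketch: two frontiers, stop when they meet in the middle."""
    if start == goal:
        return [start]
    parents_fwd, parents_bwd = {start: None}, {goal: None}
    frontier_fwd, frontier_bwd = deque([start]), deque([goal])

    def expand(frontier, parents, other_parents):
        """Expand one node; return a meeting state if the two searches intersect."""
        node = frontier.popleft()
        for nbr in neighbors.get(node, []):
            if nbr not in parents:
                parents[nbr] = node
                if nbr in other_parents:          # the two searches meet here
                    return nbr
                frontier.append(nbr)
        return None

    while frontier_fwd and frontier_bwd:
        meet = expand(frontier_fwd, parents_fwd, parents_bwd)
        if meet is None:
            meet = expand(frontier_bwd, parents_bwd, parents_fwd)
        if meet is not None:
            # Stitch the two half-paths together at the meeting state.
            path, n = [], meet
            while n is not None:                  # walk back to the start
                path.append(n)
                n = parents_fwd[n]
            path.reverse()
            n = parents_bwd[meet]
            while n is not None:                  # walk forward to the goal
                path.append(n)
                n = parents_bwd[n]
            return path
    return None
```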

Summary of algorithms

Criterion    Breadth-First   Uniform-Cost       Depth-First   Depth-Limited   Iterative Deepening
Complete?    Yes             Yes                No            No              Yes
Time         O(b^(d+1))      O(b^(1+⌊C*/ε⌋))    O(b^m)        O(b^l)          O(b^d)
Space        O(b^(d+1))      O(b^(1+⌊C*/ε⌋))    O(bm)         O(bl)           O(bd)
Optimal?     Yes             Yes                No            No              Yes

(Here l is the depth limit, m the maximum depth, C* the cost of the optimal solution, and ε a lower bound on step costs; completeness assumes a finite branching factor b, and optimality of breadth-first and iterative deepening assumes unit or non-decreasing step costs, as noted on the earlier slides.)
Iterative deepening is generally the preferred uninformed search strategy.

Repeated states Failure to detect repeated states can turn a linear problem into an exponential one! Test is often implemented as a hash table.

Solutions to repeated states
[Figure omitted: a small state space over states S, B, C and the corresponding search tree.]
Graph search: never generate a state that has been generated before; this requires keeping track of all generated states, which uses a lot of memory (e.g., the 8-puzzle has 9! = 362,880 states).
Approximation for DFS/DLS: only avoid states held in its (limited) memory, i.e., avoid looping paths.
Graph search preserves optimality for BFS and UCS, but not for DFS; it is optimal but memory-inefficient.
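
A sketch of breadth-first graph search showing the hash-table bookkeeping described above: an explored set prevents any state from being generated twice (names are illustrative).

```python
from collections import deque

def breadth_first_graph_search(successors, start, is_goal):
    """Graph-search BFS sketch: an explored set (hash set) filters repeated states."""
    frontier = deque([[start]])
    explored = {start}                            # hash table of states generated so far
    while frontier:
        path = frontier.popleft()
        if is_goal(path[-1]):
            return path
        for child in successors(path[-1]):
            if child not in explored:             # skip repeated states
                explored.add(child)
                frontier.append(path + [child])
    return None
```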

Summary
Problem formulation usually requires abstracting away real-world details to define a state space that can feasibly be explored.
There is a variety of uninformed search strategies.
Iterative deepening search uses only linear space and not much more time than the other uninformed algorithms.
More demos: http://www.cs.rmit.edu.au/AI-Search/Product/ and http://aima.cs.berkeley.edu/demos.html