Chapter 4 Search in State Spaces Xiu-jun GONG (Ph. D) School of Computer Science and Technology, Tianjin University


Outline
 Formulating the state space of a problem
 Strategies for State Space Search
– Uninformed search: BFS, DFS
– Heuristic Search: A*, Hill-climbing
– Adversarial Search: MiniMax
 Summary

State Space
 A state space is a description of the configurations of discrete states used as a simple model of machines/problems. Formally, it can be defined as a tuple [N, A, S, G] where:
– N is a set of states
– A is a set of arcs connecting the states
– S is a nonempty subset of N that contains the start states
– G is a nonempty subset of N that contains the goal states

State space search
 State space search is a process used in the field of AI in which successive configurations or states of an instance are considered, with the goal of finding a goal state with a desired property.
 The state space is implicit:
– the typical state-space graph is much too large to generate and store in memory
– nodes are generated as they are explored, and typically discarded thereafter
 A solution to an instance may consist of the goal state itself, or of a path from some initial state to the goal state.

Formulating the State Space
 Explicit State Space Graph
– List all possible states and their transformations
 Implicit State Space Graph
– The state space is described by only the essential states and their transformation rules

Explicit State Space Graph
 8-puzzle problem
– One state description: a 3-by-3 array; each cell contains one of the numbers 1–8 or the blank symbol
– Two possible state-transition descriptions:
 8 × 4 moves: one of the numbers 1–8 moves up, down, right, or left
 4 moves: the blank symbol moves up, down, right, or left

Explicit State Space Graph cont.
 Part of the state-space graph for the 8-puzzle
 The number of nodes in the state-space graph: 9! (= 362,880)
 Only small problems can be described by an explicit state-space graph

Implicit State Space Graph
 Essential States
– start: (2, 8, 3, 1, 6, 4, 7, 0, 5)
– goal: (1, 2, 3, 8, 0, 4, 7, 6, 5)
 Rules

Implicit State Space Graph cont.
 Rules (A(I,J) indexes row I, column J; 0 is the blank)
– R1: if A(I,J)=0 and J>0 then A(I,J):=A(I,J-1), A(I,J-1):=0 (blank moves left)
– R2: if A(I,J)=0 and I>0 then A(I,J):=A(I-1,J), A(I-1,J):=0 (blank moves up)
– R3: if A(I,J)=0 and J<2 then A(I,J):=A(I,J+1), A(I,J+1):=0 (blank moves right)
– R4: if A(I,J)=0 and I<2 then A(I,J):=A(I+1,J), A(I+1,J):=0 (blank moves down)
 Cell layout:
– A(0,0) A(0,1) A(0,2)
– A(1,0) A(1,1) A(1,2)
– A(2,0) A(2,1) A(2,2)
 An implicit state-space graph (rules & essential states) uses less memory than an explicit state-space graph
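The four rules above can be sketched as a successor function in Python (a sketch: the state is represented as a 9-tuple read row by row, with 0 as the blank, matching the essential states on the previous slide):

```python
# Sketch of rules R1-R4: generate the successors of an 8-puzzle state.
# A state is a 9-tuple read row by row; 0 is the blank symbol.

def successors(state):
    """Apply every applicable rule to `state` and yield the resulting states."""
    i, j = divmod(state.index(0), 3)      # (row I, column J) of the blank
    moves = []
    if j > 0: moves.append((i, j - 1))    # R1: blank moves left
    if i > 0: moves.append((i - 1, j))    # R2: blank moves up
    if j < 2: moves.append((i, j + 1))    # R3: blank moves right
    if i < 2: moves.append((i + 1, j))    # R4: blank moves down
    for ni, nj in moves:
        s = list(state)
        s[i * 3 + j], s[ni * 3 + nj] = s[ni * 3 + nj], s[i * 3 + j]
        yield tuple(s)

start = (2, 8, 3, 1, 6, 4, 7, 0, 5)       # the start state from the slide
```

For this start state the blank sits in the bottom row, so only three of the four rules apply.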

Search strategy
 For a huge search space we need:
– Careful formulation
– Implicit representation of large search graphs
– An efficient search method
 Uninformed Search
– Breadth-first search
– Depth-first search
 Heuristic Search

Breadth-first search
 Advantage
– Finds the path of minimal length to the goal
 Disadvantage
– Requires the generation and storage of a tree whose size is exponential in the depth of the shallowest goal node
 Extension
– Branch & bound
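A minimal breadth-first search over an implicit state space might look like the following (a sketch; it assumes a `successors` function yielding neighbour states, like the 8-puzzle one sketched earlier):

```python
from collections import deque

def bfs(start, is_goal, successors):
    """Breadth-first search: returns a minimal-length path to a goal, or None."""
    frontier = deque([[start]])            # FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if is_goal(path[-1]):
            return path
        for s in successors(path[-1]):
            if s not in visited:           # storing every visited state is the memory cost
                visited.add(s)
                frontier.append(path + [s])
    return None
```

Because the frontier is expanded level by level, the first goal found is reached by a shortest path, at the price of exponential memory.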

Depth-first search

DFS
 Advantage
– Low memory requirement: linear in the depth bound
– Saves only that part of the search tree consisting of the path currently being explored, plus traces
 Disadvantage
– No guarantee of a minimal-length path to the goal state
– The possibility of having to explore a large part of the search space

Iterative Deepening
 Simply put an upper limit on the depth (cost) of paths allowed
 Motivation
– e.g. an inherent limit on the range of a vehicle: "tell me all the places I can reach on 10 litres of petrol"
– prevents search diving into deep solutions
– we might already have a solution of known depth (cost), but be looking for a shallower (cheaper) one

Iterative Deepening cont
 Memory usage
– Same as DFS: O(bd)
 Time usage
– Worse than BFS, because the nodes at each level are expanded again at each later level
– BUT often not much worse, because almost all the effort is at the last level anyway: trees are "leaf-heavy"
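Re-running a depth-limited DFS with growing limits gives iterative deepening; a minimal sketch (again assuming a `successors` function yielding neighbour states):

```python
def depth_limited(state, is_goal, successors, limit, path=None):
    """DFS that gives up below `limit`; memory stays linear in the depth bound."""
    path = path or [state]
    if is_goal(state):
        return path
    if limit == 0:
        return None
    for s in successors(state):
        if s not in path:                  # avoid cycles along the current path
            found = depth_limited(s, is_goal, successors, limit - 1, path + [s])
            if found:
                return found
    return None

def iterative_deepening(state, is_goal, successors, max_depth=50):
    """Retry with limits 0, 1, 2, ...; the last level dominates the total cost."""
    for limit in range(max_depth + 1):
        found = depth_limited(state, is_goal, successors, limit)
        if found:
            return found
    return None
```

Like BFS it finds a minimal-length path; like DFS it only keeps the current path in memory.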

Heuristic Search
 Using Evaluation Functions
 A General Graph-Searching Algorithm
 Algorithm A*
– Algorithm description
– Admissibility
– Consistency

Using Evaluation Functions
 Best-first search (BFS) = Heuristic search
– proceeds preferentially using heuristics
 Basic idea
– A heuristic evaluation function f̂(n): based on information specific to the problem domain
– Expand next the node, n, having the smallest value of f̂(n)
– Terminate when the node to be expanded next is a goal node
 Eight-puzzle
– The number of tiles out of place: a measure of the goodness of a state description
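The tiles-out-of-place measure can be written directly (a sketch, using the 9-tuple state representation and the goal state from the earlier slides):

```python
GOAL = (1, 2, 3, 8, 0, 4, 7, 6, 5)   # the goal state from the slides

def misplaced_tiles(state, goal=GOAL):
    """Heuristic: number of tiles (not counting the blank) out of place."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)
```

For the start state (2, 8, 3, 1, 6, 4, 7, 0, 5), tiles 2, 8, 1 and 6 are out of place, so the heuristic value is 4.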

Using Evaluation Functions cont. A Possible Result of a Heuristic Search Procedure

Using Evaluation Functions cont.  A refined evaluation function

A General Graph-Searching Algorithm
1. Create a search tree, Tr, with the start node n0; put n0 on an ordered list OPEN
2. Create an empty list CLOSED
3. If OPEN is empty, exit with failure
4. Select the first node n on OPEN; remove it; put it on CLOSED
5. If n is a goal node, exit successfully: obtain the solution by tracing a path backward along the arcs from n to n0 in Tr
6. Expand n, generating a set M of successors; install M as successors of n by creating arcs from n to each member of M
7. Reorder the list OPEN, by an arbitrary scheme or by heuristic merit
8. Go to step 3
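The eight steps above can be sketched as one function whose `reorder` hook decides the strategy (a sketch; nodes are stored as paths so step 5's backward trace is immediate):

```python
def graph_search(start, is_goal, successors, reorder=lambda open_list: None):
    """General graph search; `reorder` turns it into BFS, DFS, or best-first."""
    open_list = [[start]]                 # OPEN holds paths ending in unexpanded nodes
    closed = set()                        # CLOSED holds already-expanded states
    while open_list:                      # step 3: fail when OPEN is empty
        path = open_list.pop(0)           # step 4: take the first node on OPEN
        n = path[-1]
        if is_goal(n):                    # step 5: the path is the traced solution
            return path
        closed.add(n)
        for m in successors(n):           # step 6: expand n
            if m not in closed and all(m != p[-1] for p in open_list):
                open_list.append(path + [m])
        reorder(open_list)                # step 7: arbitrary scheme or heuristic merit
    return None
```

With the default no-op `reorder`, new nodes stay at the end of OPEN and the behaviour is breadth-first; sorting OPEN by an evaluation function gives best-first search.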

A General Graph-Searching Algorithm cont.
 Breadth-first search
– New nodes are put at the end of OPEN (FIFO)
– Nodes are not reordered
 Depth-first search
– New nodes are put at the beginning of OPEN (LIFO)
 Best-first (heuristic) search
– OPEN is reordered according to the heuristic merit of the nodes
– A* is an example of best-first search
– The A* algorithm was first described in 1968 by Peter Hart, Nils Nilsson, and Bertram Raphael

Algorithm A*
 Reorders the nodes on OPEN according to increasing values of f̂(n)
 Some additional notation
– h(n): the actual cost of the minimal cost path between n and a goal node
– g(n): the cost of a minimal cost path from n0 to n
– f(n) = g(n) + h(n): the cost of a minimal cost path from n0 to a goal node over all paths via node n
– f(n0) = h(n0): the cost of a minimal cost path from n0 to a goal node
– ĥ(n): an estimate of h(n)

Algorithm A* Cont. (Procedures)
1. Create a search graph, G, consisting solely of the start node, n0; put n0 on a list OPEN
2. Create a list CLOSED: initially empty
3. If OPEN is empty, exit with failure
4. Select the first node on OPEN; remove it from OPEN; put it on CLOSED; call this node n
5. If n is a goal node, exit successfully: obtain the solution by tracing a path along the pointers from n to n0 in G
6. Expand node n, generating the set, M, of its successors that are not already ancestors of n in G; install these members of M as successors of n in G

Algorithm A* Cont. (Procedures)
7. Establish a pointer to n from each of those members of M that were not already in G, and add these members of M to OPEN; for each member, m, of M already on OPEN, redirect its pointer to n if the best path to m found so far is through n; for each member of M already on CLOSED, redirect the pointers of each of its descendants in G
8. Reorder the list OPEN in order of increasing f̂ values
9. Go to step 3
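A compact A* along the lines of steps 1–9 can use a priority queue for OPEN (a sketch under simplifying assumptions: instead of redirecting pointers as in step 7, it re-pushes nodes when a cheaper path is found and skips stale queue entries; `successors(n)` yields (neighbour, arc cost) pairs):

```python
import heapq

def a_star(start, is_goal, successors, h):
    """A*: expand nodes in order of increasing f(n) = g(n) + h(n)."""
    open_heap = [(h(start), 0, start, [start])]    # entries: (f, g, node, path)
    best_g = {start: 0}                            # cheapest g found so far per node
    while open_heap:
        f, g, n, path = heapq.heappop(open_heap)
        if is_goal(n):
            return path, g
        if g > best_g.get(n, float("inf")):
            continue                               # stale entry; a better path exists
        for m, cost in successors(n):
            g2 = g + cost
            if g2 < best_g.get(m, float("inf")):   # better path to m found through n
                best_g[m] = g2
                heapq.heappush(open_heap, (g2 + h(m), g2, m, path + [m]))
    return None, float("inf")
```

With an admissible `h`, the first goal popped from the queue lies on an optimal path.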

Properties of A* Algorithm
 Admissibility
– A heuristic is said to be admissible if it never overestimates the cost of reaching the goal, i.e. it is no more than the cost of the lowest-cost path to the goal
– An admissible heuristic is also known as an optimistic heuristic
 A* is guaranteed to find an optimal path to the goal under the following conditions:
– Each node in the graph has a finite number of successors
– All arcs in the graph have costs greater than some positive amount
– For all nodes in the search graph, ĥ(n) ≤ h(n)

Properties of A* Algorithm cont.
 The Consistency (or Monotone) condition
– The estimator ĥ satisfies the monotone condition if ĥ(ni) ≤ c(ni, nj) + ĥ(nj), where nj is a successor of ni and c(ni, nj) is the cost of the arc between them
– A type of triangle inequality
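The monotone condition can be checked mechanically over a graph's arcs (a sketch; `arcs` is an assumed list of (ni, nj, cost) triples and `h` a dict of heuristic values):

```python
def is_consistent(h, arcs):
    """Check the monotone condition h(ni) <= c(ni, nj) + h(nj) on every arc."""
    return all(h[ni] <= cost + h[nj] for ni, nj, cost in arcs)
```

If the check fails on some arc, f̂ can decrease along a path, and A* may expand a node before the optimal path to it has been found.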

Properties of A* Algorithm cont.
 The consistency condition implies that the f̂ values of the nodes are monotonically nondecreasing as we move away from the start node
 Theorem: If ĥ satisfies the consistency condition, then when A* expands a node n, it has already found an optimal path to n

Relationships Among Search Algorithms

Adversarial Search, Game Playing
 Two-Agent Games
– Idealized Setting: the actions of the agents are interleaved
 Example: Grid-Space World
– Two robots: "Black" and "White"
– Goal of the robots: White wants to be in the same cell as Black; Black wants to prevent this from happening
– After settling on a first move, the agent makes the move, senses what the other agent does, and then repeats the planning process in sense/plan/act fashion

Two-Agent Games: example

MiniMax Procedure (1)
 Two players: MAX and MIN
 Task: find a "best" move for MAX
 Assume that MAX moves first, and that the two players move alternately
 MAX node
– nodes at even-numbered depths correspond to positions in which it is MAX's move next
 MIN node
– nodes at odd-numbered depths correspond to positions in which it is MIN's move next

MiniMax Procedure (2)
 Complete search of most game graphs is impossible
– For chess, the graph has far too many nodes: it would take centuries to generate the complete search graph, even assuming that a successor could be generated in 1/3 of a nanosecond
– The universe is estimated to be on the order of 10^8 centuries old
– Heuristic search techniques do not reduce the effective branching factor sufficiently to be of much help
 We can use breadth-first, depth-first, or heuristic methods, except that the termination conditions must be modified

MiniMax Procedure (3)
 Estimating the best first move
– Apply a static evaluation function to the leaf nodes to measure the "worth" of each leaf node; the measurement is based on various features thought to influence this worth
 Usually, game trees are analyzed adopting the convention that
– game positions favorable to MAX cause the evaluation function to have a positive value
– positions favorable to MIN cause the evaluation function to have a negative value
– values near zero correspond to game positions not particularly favorable to either MAX or MIN

MiniMax Procedure (4)
 Extracting a good first move
– If MAX were to choose among the tip nodes of a search tree, he would prefer the node having the largest evaluation
– The backed-up value of a MAX node parent of MIN tip nodes is equal to the maximum of the static evaluations of the tip nodes
– MIN would choose the node having the smallest evaluation

MiniMax Procedure (5)
 After the parents of all tip nodes have been assigned backed-up values, we back up values another level
– MAX would choose the successor MIN node with the largest backed-up value
– MIN would choose the successor MAX node with the smallest backed-up value
 Continue to back up values, level by level from the leaves, until the successors of the start node are assigned backed-up values
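The level-by-level backing up described above is exactly what a recursive minimax computes (a sketch over an assumed explicit game tree: leaves are static-evaluation numbers, internal nodes are lists of children, and the root is a MAX node):

```python
def minimax(node, maximizing=True):
    """Back up values: MAX takes the largest child value, MIN the smallest."""
    if not isinstance(node, list):        # tip node: return its static evaluation
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)
```

For the tree [[3, 12], [2, 8]], the MIN children back up 3 and 2, so MAX's backed-up value at the root is 3.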

Tic-tac-toe example

Improving Adversarial Search
 Waiting for a stable situation
 Assistant search
 Using knowledge
 Taking a risk

Summary
 How to formulate the state space of a problem
 Strategies for State Space Search
– Uninformed search: BFS, DFS
– Heuristic Search: A*, Hill-climbing
– Adversarial Search: MiniMax
