G5AIAI Introduction to AI


G5AIAI Introduction to AI Graham Kendall Blind Searches

Problem Definition - 1
Initial State: The initial state of the problem, defined in some suitable manner.
Operator: A set of actions that moves the problem from one state to another.

Problem Definition - 1
Neighbourhood (Successor Function): The set of all possible states reachable from a given state.
State Space: The set of all states reachable from the initial state.

Problem Definition - 2
Goal Test: A test applied to a state which returns true if we have reached a state that solves the problem.
Path Cost: How much it costs to take a particular path.

Problem Definition - Example
[Figure: 8-puzzle boards showing scrambled initial states and goal states with the tiles in order.]

Problem Definition - Example
States: A description of the location of each of the eight tiles (it is also useful to include the blank).
Operators: The blank moves left, right, up or down.

Problem Definition - Example
Goal Test: The current state matches a certain state (e.g. one of the goal states shown on the previous slide).
Path Cost: Each move of the blank costs 1.

Problem Definition - Datatype
Datatype PROBLEM
Components: INITIAL-STATE, OPERATORS, GOAL-TEST, PATH-COST-FUNCTION
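As an illustrative sketch only (the field and variable names below are my own, not from the lecture), the PROBLEM datatype could be rendered in Python as:

```python
from dataclasses import dataclass
from typing import Any, Callable, List, Tuple

@dataclass
class Problem:
    """Mirrors the PROBLEM components: INITIAL-STATE, OPERATORS,
    GOAL-TEST and PATH-COST-FUNCTION."""
    initial_state: Any
    operators: Callable[[Any], List[Tuple[Any, Any]]]  # state -> [(action, next state)]
    goal_test: Callable[[Any], bool]                   # does this state solve the problem?
    path_cost: Callable[[Any, Any, Any], float] = lambda s, a, s2: 1  # cost of one step

# A toy instance: start at 0, the only action adds 1, the goal is to reach 3.
counting = Problem(
    initial_state=0,
    operators=lambda s: [("+1", s + 1)],
    goal_test=lambda s: s == 3,
)
```

The point of the datatype is that every search algorithm in this lecture touches a problem only through these four components.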

How Good is a Solution?
Does our search method actually find a solution?
Is it a good solution?
  Path Cost
  Search Cost (Time and Memory)
Does it find the optimal solution? But what is optimal?

Evaluating a Search
Completeness: Is the strategy guaranteed to find a solution?
Time Complexity: How long does it take to find a solution?

Evaluating a Search
Space Complexity: How much memory does it take to perform the search?
Optimality: Does the strategy find the optimal solution where there are several solutions?

Search Trees
[Figure: a partial noughts-and-crosses (tic-tac-toe) search tree growing from the empty board.]

Issues
Search trees grow very quickly.
The size of the search tree is governed by the branching factor.
Even this simple game has a complete search tree of 984,410 potential nodes.
The search tree for chess has a branching factor of about 35.

Implementing a Search - What we need to store
State: This represents the state in the state space to which this node corresponds.
Parent-Node: This points to the node that generated this node. In a data structure representing a tree it is usual to call this the parent node.

Implementing a Search - What we need to store
Operator: The operator that was applied to generate this node.
Depth: The number of nodes from the root (i.e. the depth).
Path-Cost: The path cost from the initial state to this node.

Implementing a Search - Datatype
Datatype node
Components: STATE, PARENT-NODE, OPERATOR, DEPTH, PATH-COST
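A minimal Python sketch of the node datatype (the class and method names are my own):

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    """Mirrors the node components: STATE, PARENT-NODE, OPERATOR, DEPTH, PATH-COST."""
    state: Any
    parent: Optional["Node"] = None   # the node that generated this one
    operator: Any = None              # the operator applied to the parent
    depth: int = 0                    # number of steps from the root
    path_cost: float = 0.0            # cost of the path from the initial state

    def child(self, operator: Any, state: Any, step_cost: float = 1) -> "Node":
        """The node reached by applying `operator` to this node's state."""
        return Node(state, self, operator, self.depth + 1, self.path_cost + step_cost)

root = Node("Arad")                   # a root node has no parent and depth 0
leaf = root.child("go-Sibiu", "Sibiu")
```

Storing the parent pointer is what lets us recover the full solution path once a goal node is found.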

Using a Tree – The Obvious Solution?
Advantages
It's intuitive.
Parents are automatically catered for.

Using a Tree – The Obvious Solution?
But
It can be wasteful on space.
It can be difficult to implement, particularly if there is a varying number of children (as in tic-tac-toe).
It is not always obvious which node to expand next. We may have to search the tree looking for the best leaf node (sometimes called the fringe or frontier nodes). This can obviously be computationally expensive.

Using a Tree – Maybe not so obvious
Therefore
It would be nice to have a "simpler" data structure to represent our tree.
And it would be nice if retrieving the next node to expand were an O(1) operation.

General Search

Function GENERAL-SEARCH(problem, QUEUING-FN) returns a solution or failure
  nodes = MAKE-QUEUE(MAKE-NODE(INITIAL-STATE[problem]))
  Loop do
    If nodes is empty then return failure
    node = REMOVE-FRONT(nodes)
    If GOAL-TEST[problem] applied to STATE(node) succeeds then return node
    nodes = QUEUING-FN(nodes, EXPAND(node, OPERATORS[problem]))
  End
End Function
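The GENERAL-SEARCH pseudocode could be sketched in Python as follows. Here a problem is just a dict and paths stand in for nodes, which is a simplification of my own rather than the lecture's datatypes:

```python
from collections import deque

def general_search(problem, queuing_fn):
    """GENERAL-SEARCH: the queuing function alone decides the search strategy.
    Returns a solution path (list of states) or None for failure."""
    nodes = deque([[problem["initial"]]])            # MAKE-QUEUE(MAKE-NODE(...))
    while nodes:                                     # empty queue means failure
        path = nodes.popleft()                       # REMOVE-FRONT(nodes)
        state = path[-1]
        if problem["goal_test"](state):              # GOAL-TEST succeeds
            return path
        expansions = [path + [s] for s in problem["successors"](state)]
        nodes = queuing_fn(nodes, expansions)        # QUEUING-FN(nodes, EXPAND(...))
    return None

def enqueue_at_end(nodes, new_paths):
    """Breadth-first behaviour: new paths join the back of the queue."""
    nodes.extend(new_paths)
    return nodes

tiny = {"initial": "S",
        "goal_test": lambda s: s == "G",
        "successors": lambda s: {"S": ["A", "B"], "A": ["G"], "B": []}.get(s, [])}
```

Swapping in a different queuing function changes the strategy without touching the main loop.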


Blind Searches

Blind Searches - Characteristics
Simply searches the state space.
Can only distinguish between a goal state and a non-goal state.
Sometimes called an uninformed search, as it has no knowledge about its domain.

Blind Searches - Characteristics
Blind searches have no preference as to which state (node) is expanded next.
The different types of blind searches are characterised by the order in which they expand the nodes. This can have a dramatic effect on how well the search performs when measured against the four criteria we defined in an earlier lecture.

Breadth First Search - Method
Expand the root node first.
Expand all nodes at level 1 before expanding level 2.
OR
Expand all nodes at level d before expanding nodes at level d+1.

Breadth First Search - Implementation
Use a queueing function that adds nodes to the end of the queue.

Function BREADTH-FIRST-SEARCH(problem) returns a solution or failure
  Return GENERAL-SEARCH(problem, ENQUEUE-AT-END)
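A direct, self-contained sketch of breadth-first search with a FIFO queue (the example tree and the names are my own):

```python
from collections import deque

def breadth_first_search(start, successors, is_goal):
    """Take nodes from the front of a FIFO queue; add children at the end
    (ENQUEUE-AT-END), so shallower nodes are always expanded first."""
    queue = deque([start])
    order = []                                  # the sequence of expanded nodes
    while queue:
        node = queue.popleft()
        order.append(node)
        if is_goal(node):
            return node, order
        queue.extend(successors(node))          # children go to the back
    return None, order

# A two-level tree: A has children B and C; B has D and E; C has F and G.
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
goal, order = breadth_first_search("A", lambda n: tree.get(n, []), lambda n: n == "G")
# Nodes are expanded level by level: A, B, C, D, E, F, G.
```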

Breadth First Search - Implementation
[Figure: a tree with root A, children B and C, and grandchildren D, E, F and G; breadth first search expands A, then B and C, then the grandchildren, reaching the goal G.]

Evaluating Breadth First Search
Observations
Very systematic.
If there is a solution, breadth first search is guaranteed to find it.
If there are several solutions then breadth first search will always find the shallowest goal state first. If the cost of a solution is a non-decreasing function of the depth, it will also always find the cheapest solution.

Evaluating Breadth First Search
Evaluating against the four criteria:
Complete? : Yes
Optimal? : Yes
Space Complexity : 1 + b + b^2 + b^3 + ... + b^d, i.e. O(b^d)
Time Complexity : 1 + b + b^2 + b^3 + ... + b^d, i.e. O(b^d)
where b is the branching factor and d is the depth of the search tree.
Note: the space/time complexity could be less, as the solution could be found anywhere on the d-th level.

Exponential Growth
Exponential growth quickly makes complete state space searches unrealistic.
If the branching factor was 10, by level 5 we would need to search 100,000 nodes (i.e. 10^5).
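The growth is easy to check numerically; a quick sketch (the function name is my own). Note the slide's 10^5 counts level 5 alone; the total including all shallower levels is slightly larger:

```python
def nodes_to_level(b, d):
    """Total nodes in a complete tree of branching factor b down to depth d:
    1 + b + b^2 + ... + b^d."""
    return sum(b ** i for i in range(d + 1))

# With branching factor 10, level 5 alone holds 10^5 = 100,000 nodes,
# and the whole tree down to level 5 holds 111,111.
print(nodes_to_level(10, 5))
```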

Exponential Growth
[Table omitted in transcript: time and memory requirements for breadth-first search, assuming a branching factor of 10, 100 bytes per node and searching 1000 nodes/second.]

Exponential Growth - Observations
Space is more of a limiting factor for breadth first search than time.
Time is still an issue: who has 35 years to wait for an answer to a level 12 problem (or even 128 days for a level 10 problem)?
It could be argued that as technology gets faster, exponential growth will not be a problem. But even if technology were 100 times faster, we would still have to wait 35 years for a level 14 problem, and what if we hit a level 15 problem?

Uniform Cost Search (vs BFS)
BFS will find the optimal (shallowest) solution so long as the cost is a non-decreasing function of the depth.
Uniform cost search can be used when this is not the case; it will find the cheapest solution provided that the cost of a path never decreases as we proceed along it.
Uniform cost search works by expanding the lowest cost node on the fringe.

Uniform Cost Search - Example
[Figure: a graph with start S and goal G; S→A costs 1, S→B costs 5, S→C costs 15, A→G costs 10, B→G costs 5.]
BFS will find the path SAG, with a cost of 11, but SBG is cheaper, with a cost of 10.
Uniform cost search will find the cheaper solution (SBG). It will generate SAG but will not return it, as it is not at the head of the queue.
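Uniform cost search can be sketched with a priority queue. The edge costs below are my reading of the example figure (S→A = 1, S→B = 5, S→C = 15, A→G = 10, B→G = 5), chosen so that SAG costs 11 and SBG costs 10 as the slide states:

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Always pop the cheapest path on the fringe (a min-heap keyed on path cost)."""
    fringe = [(0, [start])]                       # (path cost so far, path)
    while fringe:
        cost, path = heapq.heappop(fringe)        # lowest cost path first
        state = path[-1]
        if state == goal:                         # goal test on expansion, not generation
            return cost, path
        for nxt, step in graph.get(state, {}).items():
            heapq.heappush(fringe, (cost + step, path + [nxt]))
    return None

graph = {"S": {"A": 1, "B": 5, "C": 15}, "A": {"G": 10}, "B": {"G": 5}}
cost, path = uniform_cost_search(graph, "S", "G")
# SAG (cost 11) is generated first, but SBG (cost 10) reaches the head of
# the priority queue before it, so the cheaper path is returned.
```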

Depth First Search - Method
Expand the root node first.
Explore one branch of the tree before exploring another branch.

Depth First Search - Implementation
Use a queueing function that adds nodes to the front of the queue.

Function DEPTH-FIRST-SEARCH(problem) returns a solution or failure
  Return GENERAL-SEARCH(problem, ENQUEUE-AT-FRONT)

Depth First Search - Observations
Only needs to store the path from the root to the leaf node, as well as the unexpanded nodes.
For a state space with a branching factor of b and a maximum depth of m, DFS requires storage of only bm nodes, i.e. O(bm).
Time complexity for DFS is O(b^m) in the worst case.

Depth First Search - Observations
If DFS goes down an infinite branch it will not terminate unless it finds a goal state.
Even if it does find a solution, there may be a better solution at a shallower level in the tree.
Therefore, depth first search is neither complete nor optimal.

Depth Limited Search (vs DFS) DFS may never terminate as it could follow a path that has no solution on it DLS solves this by imposing a depth limit, at which point the search terminates that particular branch

Depth Limited Search - Observations
Can be implemented by the general search algorithm using operators which keep track of the depth.
Choice of the depth parameter is important:
Too deep is wasteful of time and space.
Too shallow and we may never reach a goal state.

Depth Limited Search - Observations
If the depth parameter, l, is set deep enough then we are guaranteed to find a solution if one exists.
Therefore it is complete if l >= d (where d is the depth of the solution).
Space requirements are O(bl).
Time requirements are O(b^l).
DLS is not optimal.

Map of Romania
[Figure: road map of Romania, with towns including Arad, Sibiu, Fagaras, Rimnicu Vilcea, Pitesti and Bucharest.]
On the Romania map there are 20 towns, so any town is reachable in at most 19 steps.
In fact, any town is reachable in at most 9 steps.

Iterative Deepening Search (vs DLS)
The problem with DLS is choosing the depth parameter.
Setting the depth parameter to 19 is obviously wasteful if using DLS.
IDS overcomes this problem by trying depth limits of 0, 1, 2, ..., n. In effect it combines BFS and DFS.

Iterative Deepening Search - Observations
IDS may seem wasteful as it expands the same nodes many times. In fact, when b = 10, only about 11% more nodes are expanded than for a BFS or a DLS down to level d.
Time Complexity = O(b^d)
Space Complexity = O(bd)
For large search spaces, where the depth of the solution is not known, IDS is normally the preferred search method.
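IDS can be sketched as repeated depth-limited searches (the function names and the example tree are my own):

```python
def depth_limited(state, successors, is_goal, limit):
    """Depth first search that refuses to expand nodes below the depth limit.
    Returns a path to a goal, or None if none is found within the limit."""
    if is_goal(state):
        return [state]
    if limit == 0:
        return None                                # cut off this branch
    for child in successors(state):
        path = depth_limited(child, successors, is_goal, limit - 1)
        if path is not None:
            return [state] + path
    return None

def iterative_deepening(start, successors, is_goal, max_limit=50):
    """Try depth limits 0, 1, 2, ... so the shallowest solution is found first."""
    for limit in range(max_limit + 1):
        path = depth_limited(start, successors, is_goal, limit)
        if path is not None:
            return path
    return None

tree = {"A": ["B", "C"], "B": ["D"], "C": ["E", "G"]}
path = iterative_deepening("A", lambda n: tree.get(n, []), lambda n: n == "G")
# The goal G sits at depth 2, so the limit-2 pass finds the path A, C, G.
```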

Repeated States - Three Methods
1. Do not generate a node that is the same as the parent node. Or: do not return to the state you have just come from.
2. Do not create paths with cycles in them. To do this we can check each ancestor node and refuse to create a state that is the same as any of them.

Repeated States - Three Methods
3. Do not generate any state that is the same as any state generated before. This requires that every state is kept in memory (meaning a potential space complexity of O(b^d)).
The three methods are listed in increasing order of the computational overhead needed to implement them.
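The strictest method, never regenerating any previously seen state, can be sketched as breadth first search with an explored set (the graph and the names are my own):

```python
from collections import deque

def bfs_no_repeats(start, successors, is_goal):
    """Breadth first search that refuses to generate any state seen before."""
    explored = {start}                  # every state ever generated is remembered
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        state = path[-1]
        if is_goal(state):
            return path
        for nxt in successors(state):
            if nxt not in explored:     # the repeated-state check
                explored.add(nxt)
                queue.append(path + [nxt])
    return None

# This graph contains cycles (A-B and A-C); without the explored set the
# search would keep regenerating the same states forever.
graph = {"A": ["B"], "B": ["A", "C"], "C": ["A", "G"], "G": []}
path = bfs_no_repeats("A", lambda n: graph[n], lambda n: n == "G")
```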

Blind Searches – Summary
(b = branching factor, d = depth of solution, m = maximum depth of the search tree, l = depth limit)

Criterion    BFS      UCS      DFS      DLS              IDS
Time         O(b^d)   O(b^d)   O(b^m)   O(b^l)           O(b^d)
Space        O(b^d)   O(b^d)   O(bm)    O(bl)            O(bd)
Complete?    Yes      Yes      No       Yes, if l >= d   Yes
Optimal?     Yes      Yes      No       No               Yes

G5AIAI Introduction to AI Graham Kendall End of Blind Searches