
1 G5AIAI Introduction to AI
Graham Kendall Blind Searching

2 Problem Definition - 1
Initial State: the initial state of the problem, defined in some suitable manner.
Operator: a set of actions that moves the problem from one state to another.

3 Problem Definition - 1
Neighbourhood (Successor Function): the set of all possible states reachable from a given state.
State Space: the set of all states reachable from the initial state.

4 Problem Definition - 2
Goal Test: a test applied to a state which returns true if we have reached a state that solves the problem.
Path Cost: how much it costs to take a particular path.

5 Problem Definition - Example
[Figure: an example 8-puzzle, showing an initial state with the tiles scrambled and a goal state with the tiles in order]

6 Problem Definition - Example
States: a description of the location of each of the eight tiles (it is also useful to include the blank).
Operators: the blank moves left, right, up or down.

7 Problem Definition - Example
Goal Test: the current state matches a certain state (e.g. the goal state shown on the previous slide).
Path Cost: each move of the blank costs 1.

8 Problem Definition - Datatype
Datatype PROBLEM
Components: INITIAL-STATE, OPERATORS, GOAL-TEST, PATH-COST-FUNCTION
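As a rough illustration, this datatype might be written in Python along the following lines (a sketch only; the field names mirror the slide, while the use of a dataclass and of callables for the operators, goal test and path cost function is an assumption):

    from dataclasses import dataclass
    from typing import Any, Callable, Iterable

    @dataclass
    class Problem:
        # INITIAL-STATE: the state the search starts from
        initial_state: Any
        # OPERATORS: maps a state to the (action, successor-state) pairs reachable from it
        operators: Callable[[Any], Iterable[tuple]]
        # GOAL-TEST: returns True when a state solves the problem
        goal_test: Callable[[Any], bool]
        # PATH-COST-FUNCTION: cost of applying an action to move between two states
        path_cost: Callable[[Any, Any, Any], float]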

9 How Good is a Solution?
Does our search method actually find a solution?
Is it a good solution? (Path Cost)
Search Cost (Time and Memory)
Does it find the optimal solution? But what is optimal?

10 Evaluating a Search
Completeness: is the strategy guaranteed to find a solution?
Time Complexity: how long does it take to find a solution?

11 Evaluating a Search
Space Complexity: how much memory does it take to perform the search?
Optimality: does the strategy find the optimal solution when there are several solutions?

12 Search Trees
[Figure: the first few levels of the search tree for a game of noughts and crosses (tic-tac-toe)]

13 Issues
Search trees grow very quickly.
The size of the search tree is governed by the branching factor.
Even this simple game (tic-tac-toe) has a complete search tree of 984,410 potential nodes.
The search tree for chess has a branching factor of about 35.

14 Implementing a Search - What we need to store
State: the state in the state space to which this node corresponds.
Parent-Node: the node that generated this node. In a data structure representing a tree it is usual to call this the parent node.

15 Implementing a Search - What we need to store
Operator: the operator that was applied to generate this node.
Depth: the number of nodes from the root to this node (i.e. its depth).
Path-Cost: the path cost from the initial state to this node.

16 Implementing a Search - Datatype
Datatype node
Components: STATE, PARENT-NODE, OPERATOR, DEPTH, PATH-COST
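A matching sketch of the node datatype in Python (again only an illustration; the default values are assumptions):

    from dataclasses import dataclass
    from typing import Any, Optional

    @dataclass
    class Node:
        state: Any                       # STATE: the state this node corresponds to
        parent: Optional["Node"] = None  # PARENT-NODE: the node that generated this one
        operator: Any = None             # OPERATOR: the action applied to generate this node
        depth: int = 0                   # DEPTH: number of steps from the root
        path_cost: float = 0.0           # PATH-COST: cost of the path from the initial state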

17 Using a Tree – The Obvious Solution?
Advantages:
It's intuitive.
Parents are automatically catered for.

18 Using a Tree – The Obvious Solution?
But:
It can be wasteful on space.
It can be difficult to implement, particularly if there is a varying number of children (as in tic-tac-toe).
It is not always obvious which node to expand next. We may have to search the tree looking for the best leaf node (sometimes called a fringe or frontier node), which can obviously be computationally expensive.

19 Using a Tree – Maybe not so obvious
Therefore:
It would be nice to have a "simpler" data structure to represent our tree.
And it would be nice if finding the next node to be expanded was an O(1) operation.

20 General Search
Function GENERAL-SEARCH(problem, QUEUING-FN) returns a solution or failure
    nodes = MAKE-QUEUE(MAKE-NODE(INITIAL-STATE[problem]))
    Loop do
        If nodes is empty then return failure
        node = REMOVE-FRONT(nodes)
        If GOAL-TEST[problem] applied to STATE(node) succeeds then return node
        nodes = QUEUING-FN(nodes, EXPAND(node, OPERATORS[problem]))
    End
End Function

(Slides 21-26 step through the GENERAL-SEARCH function above one line at a time, repeating the same pseudocode with each line highlighted in turn.)
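As a rough guide, the GENERAL-SEARCH pseudocode could be rendered in Python along these lines (a sketch only, reusing the Problem and Node classes sketched earlier; the expand helper and the queuing-function signature are assumptions):

    def expand(node, problem):
        """Generate the child nodes reachable from `node` by applying the problem's operators."""
        children = []
        for action, next_state in problem.operators(node.state):
            children.append(Node(
                state=next_state,
                parent=node,
                operator=action,
                depth=node.depth + 1,
                path_cost=node.path_cost + problem.path_cost(node.state, action, next_state),
            ))
        return children

    def general_search(problem, queuing_fn):
        """Return a goal Node, or None for failure; queuing_fn decides the expansion order."""
        nodes = [Node(state=problem.initial_state)]   # MAKE-QUEUE(MAKE-NODE(INITIAL-STATE))
        while nodes:                                   # if nodes is empty, fall through to failure
            node = nodes.pop(0)                        # REMOVE-FRONT(nodes)
            if problem.goal_test(node.state):          # GOAL-TEST succeeds: return the node
                return node
            nodes = queuing_fn(nodes, expand(node, problem))
        return None                                    # failure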

27 Blind Searches

28 Blind Searches - Characteristics
Simply searches the state space.
Can only distinguish between a goal state and a non-goal state.
Sometimes called an uninformed search, as it has no knowledge about its domain.

29 Blind Searches - Characteristics
Blind searches have no preference as to which state (node) is expanded next.
The different types of blind search are characterised by the order in which they expand the nodes. This can have a dramatic effect on how well the search performs when measured against the four criteria we defined in an earlier lecture.

30 Breadth First Search - Method
Expand the root node first.
Expand all nodes at level 1 before expanding level 2.
OR
Expand all nodes at level d before expanding nodes at level d+1.

31 Breadth First Search - Implementation
Use a queueing function that adds nodes to the end of the queue.
Function BREADTH-FIRST-SEARCH(problem) returns a solution or failure
    Return GENERAL-SEARCH(problem, ENQUEUE-AT-END)
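In terms of the Python sketch above, ENQUEUE-AT-END simply appends the newly generated nodes behind the existing ones (illustrative only):

    def enqueue_at_end(nodes, new_nodes):
        # Breadth first: new nodes join the back of the queue (FIFO behaviour)
        return nodes + list(new_nodes)

    def breadth_first_search(problem):
        return general_search(problem, enqueue_at_end)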

32 Breadth First Search - Implementation
[Figure: breadth-first expansion of a small tree - the root is expanded first, then all of its children, then all of the nodes at the next level down]

33 Evaluating Breadth First Search
Observations:
Very systematic.
If there is a solution, breadth first search is guaranteed to find it.
If there are several solutions then breadth first search will always find the shallowest goal state first; if the cost of a solution is a non-decreasing function of the depth, it will always find the cheapest solution.

34 Evaluating Breadth First Search
Evaluating against the four criteria:
Complete? : Yes
Optimal? : Yes
Space Complexity : 1 + b + b^2 + b^3 + ... + b^d, i.e. O(b^d)
Time Complexity : 1 + b + b^2 + b^3 + ... + b^d, i.e. O(b^d)
where b is the branching factor and d is the depth of the search tree.
Note: the space/time complexity could be less, as the solution could be found anywhere on the dth level.

35 Exponential Growth
Exponential growth quickly makes complete state space searches unrealistic.
If the branching factor were 10, by level 5 we would need to search 100,000 nodes (i.e. 10^5).
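The figures quoted can be checked with a few lines of Python (b = 10 is the branching factor assumed on the slide):

    b = 10
    for d in range(1, 6):
        print(d, b ** d)                         # nodes at level d; level 5 gives 100,000 (10^5)
    print(sum(b ** i for i in range(0, 6)))      # total nodes from the root down to level 5: 111,111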

36 Exponential Growth
[Table: time and memory requirements for breadth-first search, assuming a branching factor of 10, 100 bytes per node and searching 1000 nodes/second]

37 Exponential Growth - Observations
Space is more of a problem for breadth first search than time.
Time is still an issue. Who has 35 years to wait for an answer to a level 12 problem (or even 128 days for a level 10 problem)?
It could be argued that as technology gets faster, exponential growth will not be a problem. But even if technology were 100 times faster we would still have to wait 35 years for a level 14 problem, and what if we hit a level 15 problem!

38 Uniform Cost Search (vs BFS)
BFS will find the optimal (shallowest) solution so long as the cost is a non-decreasing function of the depth.
Uniform Cost Search can be used when this is not the case; it will find the cheapest solution provided that the cost of a path never decreases as we proceed along it.
Uniform Cost Search works by expanding the lowest-cost node on the fringe.

39 Uniform Cost Search - Example
[Figure: a small graph with start node S, intermediate nodes A, B and C, a goal node G, and edge costs including 1, 5, 10 and 15]
BFS will find the path SAG, with a cost of 11, but SBG is cheaper, with a cost of 10.
Uniform Cost Search will find the cheaper solution (SBG). It will generate SAG, but will not return it because it is not at the head of the queue.
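A sketch of uniform cost search using Python's heapq module as the priority queue. The small graph is an assumed reconstruction of the example above: the edge costs are chosen so that SAG costs 11 and SBG costs 10, as the slide states; the costs involving node C are not recoverable from the transcript, so C is omitted.

    import heapq

    # Assumed reconstruction of the example: node -> list of (neighbour, edge cost)
    graph = {
        "S": [("A", 1), ("B", 5)],
        "A": [("G", 10)],
        "B": [("G", 5)],
        "G": [],
    }

    def uniform_cost_search(graph, start, goal):
        """Expand the lowest-cost node on the fringe first; return (cost, path) or None."""
        fringe = [(0, start, [start])]                 # priority queue keyed on path cost
        while fringe:
            cost, state, path = heapq.heappop(fringe)
            if state == goal:
                return cost, path
            for neighbour, edge_cost in graph[state]:
                heapq.heappush(fringe, (cost + edge_cost, neighbour, path + [neighbour]))
        return None

    print(uniform_cost_search(graph, "S", "G"))        # (10, ['S', 'B', 'G']) - the cheaper SBG path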

40 Depth First Search - Method
Expand the root node first.
Explore one branch of the tree before exploring another branch.

41 Depth First Search - Implementation
Use a queueing function that adds nodes to the front of the queue.
Function DEPTH-FIRST-SEARCH(problem) returns a solution or failure
    Return GENERAL-SEARCH(problem, ENQUEUE-AT-FRONT)
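In terms of the earlier Python sketch, ENQUEUE-AT-FRONT places the newly generated nodes ahead of the existing ones, so the most recently generated node is expanded next (illustrative only):

    def enqueue_at_front(nodes, new_nodes):
        # Depth first: new nodes go to the front of the queue (LIFO behaviour)
        return list(new_nodes) + nodes

    def depth_first_search(problem):
        return general_search(problem, enqueue_at_front)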

42 Depth First Search - Observations
Only needs to store the path from the root to the current leaf node, as well as the remaining unexpanded nodes.
For a state space with a branching factor of b and a maximum depth of m, DFS requires storage of only bm nodes.
Time complexity for DFS is O(b^m) in the worst case.

43 Depth First Search - Observations
If DFS goes down an infinite branch it will not terminate unless it finds a goal state. Even if it does find a solution, there may be a better solution at a shallower level in the tree.
Therefore, depth first search is neither complete nor optimal.

44 Depth Limited Search (vs DFS)
DFS may never terminate, as it could follow a path that has no solution on it.
DLS solves this by imposing a depth limit, at which point the search abandons that particular branch.

45 Depth Limited Search - Observations
Can be implemented by the general search algorithm using operators which keep track of the depth.
The choice of depth parameter is important:
Too deep is wasteful of time and space.
Too shallow and we may never reach a goal state.

46 Depth Limited Search - Observations
If the depth parameter, l, is set deep enough then we are guaranteed to find a solution if one exists.
Therefore it is complete if l >= d (where d is the depth of the solution).
Space requirements are O(bl).
Time requirements are O(b^l).
DLS is not optimal.
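Depth limited search is often written recursively rather than through the general queue; a minimal sketch under that assumption, reusing the Problem class sketched earlier and returning None when no solution is found within the limit:

    def depth_limited_search(problem, limit):
        def recurse(state, depth):
            if problem.goal_test(state):
                return [state]                 # found a goal: return the path from here
            if depth == limit:
                return None                    # cut this branch off at the depth limit
            for _action, next_state in problem.operators(state):
                result = recurse(next_state, depth + 1)
                if result is not None:
                    return [state] + result
            return None
        return recurse(problem.initial_state, 0)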

47 Map of Romania
[Figure: a map of Romania showing 20 towns (Arad, Bucharest, Craiova, Dobreta, Eforie, Fagaras, Giurgiu, Hirsova, Iasi, Lugoj, Mehadia, Neamt, Oradea, Pitesti, Rimnicu Vilcea, Sibiu, Timisoara, Urziceni, Vaslui and Zerind) and the roads connecting them]
On the Romania map there are 20 towns, so any town is reachable in at most 19 steps. In fact, any town is reachable in at most 9 steps.

48 Iterative Deepening Search (vs DLS)
The problem with DLS is choosing the depth parameter. Setting the depth parameter to 19 is obviously wasteful when any town is in fact reachable in 9 steps.
IDS overcomes this problem by trying depth limits of 0, 1, 2, ..., n in turn. In effect it combines BFS and DFS.

49 Iterative Deepening Search - Observations
IDS may seem wasteful as it expands the same nodes many times. In fact, when b = 10, only about 11% more nodes are expanded than for a BFS or a DLS down to level d.
Time Complexity = O(b^d)
Space Complexity = O(bd)
For large search spaces, where the depth of the solution is not known, IDS is normally the preferred search method.
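Iterative deepening then simply repeats the depth limited search with limits 0, 1, 2, ... until a solution appears (a sketch built on the depth_limited_search above; the max_depth cap is an assumption):

    def iterative_deepening_search(problem, max_depth=50):
        for limit in range(max_depth + 1):
            result = depth_limited_search(problem, limit)
            if result is not None:
                return result                  # the first solution found is also the shallowest
        return None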

50 Repeated States - Three Methods
Do not generate a node that is the same as its parent node; in other words, do not return to the state you have just come from.
Do not create paths with cycles in them. To do this we can check each ancestor node and refuse to create a state that is the same as any of them.

51 Repeated States - Three Methods
Do not generate any state that is the same as any state generated before. This requires that every state generated is kept in memory (meaning a potential space complexity of O(b^d)).
The three methods are listed in increasing order of the computational overhead needed to implement them.
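A sketch of the third (most expensive) method: a breadth first search that keeps every state it has ever generated, assuming states are hashable (names are illustrative):

    from collections import deque

    def breadth_first_graph_search(problem):
        """BFS that never revisits a state; stores every generated state in `seen`."""
        start = problem.initial_state
        frontier = deque([start])
        seen = {start}                             # every state generated so far
        while frontier:
            state = frontier.popleft()
            if problem.goal_test(state):
                return state
            for _action, next_state in problem.operators(state):
                if next_state not in seen:         # method 3: skip any previously generated state
                    seen.add(next_state)
                    frontier.append(next_state)
        return None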

52 Blind Searches – Summary
b = branching factor
d = depth of solution
m = maximum depth of the search tree
l = depth limit

53 G5AIAI Introduction to AI
Graham Kendall End of Blind Searches

