# Artificial Intelligence Second Term Fourth Year (07 Batch)


Knowledge Representation

Terminology
State: A state is a situation that an agent can find itself in.
Types of states:
- World states: the actual, concrete situations in the real world.
- Representational states: the abstract descriptions of the real world that the agent uses when deliberating about what to do.
State space: a graph whose nodes are the set of all states, and whose links are the actions that transform one state into another.

Terminology
- Search tree: a tree in which the root node is the start state and the children of each node are the states reachable by taking any action.
- Search node: a node in the search tree.
- Goal: a state that the agent is trying to reach.
- Action: something that the agent can choose to do.
- Successor function: describes the agent's options. Given a state, it returns a set of (action, state) pairs, where each state is the state reachable by taking that action.
- Branching factor: the number of actions available to the agent at a node.

To solve a problem:
1. Define the problem precisely.
2. Analyze the problem.
3. Isolate and represent the task knowledge.
4. Choose the best problem-solving technique and apply it.

Problems: There are three general categories of problems in AI:
1. Single-agent path-finding problems.
2. Two-player games.
3. Constraint-satisfaction problems.

Single-Agent Pathfinding Problems
In these problems we have a single problem-solver making the decisions, and the task is to find a sequence of primitive steps that takes us from the initial location to the goal location. Famous examples: Rubik's Cube (Erno Rubik, 1975), the Sliding-Tile Puzzle, and navigation problems such as the Travelling Salesman Problem.

Two-Player Games
In a two-player game, one must consider the moves of an opponent, and the ultimate goal is a strategy that will guarantee a win whenever possible. Two-player perfect-information games have received the most attention from researchers so far, but researchers are now starting to consider more complex games, many of which involve an element of chance. The best Chess, Checkers, and Othello players in the world are computer programs!

Constraint-Satisfaction Problems
In these problems we also have a single agent making all the decisions, but here we are not concerned with the sequence of steps required to reach the solution, only with the solution itself. The task is to identify a state of the problem such that all the constraints of the problem are satisfied. Famous examples: the Eight Queens problem and Number Partitioning.

The Problem Space
A problem space consists of a set of states of a problem and a set of operators that change the state.
- State: a symbolic structure that represents a single configuration of the problem in sufficient detail to allow problem solving to proceed.
- Operator: a function that takes a state and maps it to another state.

A problem (or, more precisely, a problem instance) consists of:
- A problem space.
- An initial state.
- A set of goal states, explicitly or implicitly characterized.
A solution is a sequence of operator applications leading from the initial state to a goal state, i.e. a path through the state space from the initial state to a final state.

Representing State
- At any moment, the relevant world is represented as a state.
- Initial (start) state: S.
- An action (or operation), if it is applied, changes the current state to another state: a state transition.
- An action can be taken (is applicable) only if its precondition is met by the current state.
- For a given state, there may be more than one applicable action.
- Goal state: a state that satisfies the goal description or passes the goal test.
- Dead-end state: a non-goal state to which no action is applicable.

Representing States
The state-space representation of a problem is a triplet (I, O, G), where I is the initial state, O is a set of operators on states, and G is the set of goal states. A state space can be organized as a graph: nodes are the states in the space, and arcs are the actions/operations. A solution is a path from the initial state to a goal state. The size of a problem is usually described in terms of the number of possible states (the size of the state space); Chess, for example, has an astronomically large state space.

Some Example Problems: The 8-Puzzle
Given an initial configuration of 8 numbered tiles on a 3 x 3 board, move the tiles so as to produce a desired goal configuration of the tiles.

8-Puzzle
- State: a 3 x 3 array configuration of the tiles on the board.
- Operators: the blank moves Left, Right, Up, or Down.
- Initial state: a particular configuration of the board.
- Goal: a particular configuration of the board.
- Path cost: each step costs 1, so the path cost is just the length of the path.
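The formulation above can be sketched as a successor function. A minimal illustrative version in Python (the tuple-of-tuples representation and the function name are assumptions, not from the slides):

```python
# States are 3x3 tuples of tuples; 0 marks the blank.
def successors(state):
    """Return the (action, state) pairs reachable by sliding the blank."""
    moves = {'Up': (-1, 0), 'Down': (1, 0), 'Left': (0, -1), 'Right': (0, 1)}
    # locate the blank (0)
    r, c = next((i, j) for i in range(3) for j in range(3) if state[i][j] == 0)
    result = []
    for action, (dr, dc) in moves.items():
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            grid = [list(row) for row in state]
            # swap the blank with the neighbouring tile
            grid[r][c], grid[nr][nc] = grid[nr][nc], grid[r][c]
            result.append((action, tuple(map(tuple, grid))))
    return result

centre = ((1, 2, 3), (4, 0, 5), (6, 7, 8))
print(len(successors(centre)))  # 4: the branching factor with the blank in the middle
```

With the blank in a corner only 2 moves apply, so the branching factor varies between 2 and 4.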

A portion of the state-space representation of the 8-Puzzle problem. (Diagram: a board configuration and the configurations reachable from it by single blank moves.)

The 8-Queens Problem
The goal of the 8-Queens problem is to place eight queens on a chessboard such that no queen attacks any other. (A queen attacks any piece in the same row, column, or diagonal.)
- States: any arrangement of 0 to 8 queens on the board.
- Operators: add a queen to any square.
- Goal test: 8 queens on the board, none attacked.
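The goal test for this formulation is easy to write down. A hedged sketch in Python, representing an arrangement as a list where rows[i] is the row of the queen in column i (an assumed representation, not from the slides):

```python
def attacks(c1, r1, c2, r2):
    """True if queens at (column, row) positions attack each other."""
    return r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2)

def is_goal(rows):
    """Goal test: 8 queens placed, no pair attacking."""
    return len(rows) == 8 and not any(
        attacks(i, rows[i], j, rows[j])
        for i in range(8) for j in range(i + 1, 8))

print(is_goal([0, 4, 7, 5, 2, 6, 1, 3]))  # a known solution: True
```

Placing one queen per column already rules out column attacks, which is why a single list suffices.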

Cryptarithmetic
In cryptarithmetic problems, letters stand for digits and the aim is to find a substitution of digits for letters such that the resulting sum is arithmetically correct. Example:

      FORTY        29786
    +   TEN      +   850
    +   TEN      +   850
    -------      -------
      SIXTY        31486

Solution: F=2, O=9, R=7, T=8, Y=6, E=5, N=0, S=3, I=1, X=4.

Cryptarithmetic
- States: a cryptarithmetic puzzle with some letters replaced by digits.
- Operators: replace all occurrences of a letter with a digit not already appearing in the puzzle.
- Goal test: the puzzle contains only digits and represents a correct sum.
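The goal test above can be sketched directly. The helper below is illustrative (the function and parameter names are my own); it checks distinct digits, no leading zeros, and a correct sum, using the classic FORTY + TEN + TEN = SIXTY solution as an example:

```python
def is_goal(assignment, addends, total):
    """Goal test for a cryptarithmetic puzzle: every letter has a distinct
    digit, no word has a leading zero, and the sum is arithmetically correct."""
    if len(set(assignment.values())) != len(assignment):
        return False  # two letters share a digit
    if any(assignment[w[0]] == 0 for w in addends + [total]):
        return False  # leading zero
    value = lambda w: int(''.join(str(assignment[c]) for c in w))
    return sum(value(w) for w in addends) == value(total)

sol = {'F': 2, 'O': 9, 'R': 7, 'T': 8, 'Y': 6,
       'E': 5, 'N': 0, 'S': 3, 'I': 1, 'X': 4}
print(is_goal(sol, ['FORTY', 'TEN', 'TEN'], 'SIXTY'))  # True: 29786 + 850 + 850 = 31486
```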

Missionaries and Cannibals
Three missionaries and three cannibals come to a crocodile-infested river. There is a boat on their side that can carry either one or two people. If the cannibals ever outnumber the missionaries on either bank, the cannibals eat the missionaries. How can they use the boat to cross the river so that all missionaries survive?
- State: the configuration of missionaries, cannibals, and the boat on each side of the river.
- Operators: move the boat, containing some set of occupants, across the river (in either direction).
- Goal: move all the missionaries and cannibals across the river.
- Constraint: missionaries can never be outnumbered by cannibals on either side of the river, or else the missionaries are killed.
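The state and operators above translate into a few lines of Python. This sketch assumes a representation of my own choosing: a tuple of (missionaries on the left bank, cannibals on the left bank, boat-on-left flag). Only safe successor states are generated:

```python
def safe(m, c):
    """Missionaries are never outnumbered on either bank (a bank with no
    missionaries is always safe)."""
    return (m == 0 or m >= c) and (3 - m == 0 or 3 - m >= 3 - c)

def successors(state):
    """Apply every legal boat move (1 or 2 occupants) from the current bank."""
    m, c, boat_left = state
    moves = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]  # (missionaries, cannibals) aboard
    result = []
    for dm, dc in moves:
        nm, nc = (m - dm, c - dc) if boat_left else (m + dm, c + dc)
        if 0 <= nm <= 3 and 0 <= nc <= 3 and safe(nm, nc):
            result.append((nm, nc, not boat_left))
    return result

print(sorted(successors((3, 3, True))))  # 3 safe first moves
```

From the start state (3, 3, True) only three moves are safe: send one cannibal, two cannibals, or one of each.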

Travelling Salesman Problem
Given a road map of n cities, find the shortest tour that visits every city on the map exactly once and then returns to the original city (a Hamiltonian circuit). In the geometric version, the map is a complete graph of n vertices. There are n!/(2n) = (n-1)!/2 legal tours; the task is to find one legal tour that is shortest.
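A brute-force solver that enumerates the tours is easy to sketch for small n. The 4-city distance matrix below is hypothetical, purely for illustration:

```python
from itertools import permutations

def shortest_tour(dist):
    """Brute force: fix city 0 as the start and try all (n-1)! orderings
    (each distinct tour is examined twice, once per direction)."""
    n = len(dist)
    best, best_tour = float('inf'), None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if length < best:
            best, best_tour = length, tour
    return best, best_tour

# Hypothetical symmetric distance matrix for 4 cities:
d = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]
print(shortest_tour(d))  # (18, (0, 1, 3, 2, 0))
```

The factorial growth is exactly why TSP motivates smarter search: even n = 15 already means billions of tours.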

(Figure: an instance of the travelling salesman problem.)

(Figure: the search space of the travelling salesman problem.)

Remove 5 Sticks Given the following configuration of sticks, remove exactly 5 sticks in such a way that the remaining configuration forms exactly 3 squares.

Real-World Problems: route finding, VLSI layout, robot navigation, etc.

State-space search is the process of searching through a state space for a solution by making explicit a sufficient portion of an implicit state-space graph to include a goal node. Initially V = {S}, where S is the start node; when S is expanded, its successors are generated, those nodes are added to V, and the associated arcs are added to E. This process continues until a goal node is generated (included in V) and identified (by the goal test).
During search, a node can be in one of three categories:
- Not generated yet (has not been made explicit yet).
- OPEN: generated but not yet expanded.
- CLOSED: expanded.
Search strategies differ mainly in how they select an OPEN node for expansion at each step of the search.

A General State-Space Search Algorithm
Each search node n stores:
- a state description
- a pointer to its parent (a backpointer), if needed
- the operator used to generate n (optional)
- the depth of n (optional)
- the path cost from S to n (if available)
The OPEN list is initialized to {S}; node insertion/removal depends on the specific search strategy. The CLOSED list is initialized to {} and is organized by backpointers.

    open := {S}; closed := {};
    repeat
        n := select(open);      /* select one node from open for expansion */
        if n is a goal then exit with success;   /* delayed goal testing */
        expand(n);              /* generate all children of n;
                                   put the newly generated nodes in open (check duplicates);
                                   put n in closed (check duplicates) */
    until open = {};
    exit with failure
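The pseudocode above can be fleshed out as a runnable sketch. This illustrative Python version (names are my own) uses delayed goal testing and a CLOSED set, and switches between breadth-first and depth-first behaviour purely by how select() removes a node from OPEN:

```python
from collections import deque

def general_search(start, successors, is_goal, strategy='bfs'):
    """General state-space search. Returns a path from start to a goal,
    or None if OPEN is exhausted (failure)."""
    open_list = deque([(start, [start])])
    closed = set()
    while open_list:
        # select(): FIFO removal gives breadth-first, LIFO gives depth-first
        node, path = open_list.popleft() if strategy == 'bfs' else open_list.pop()
        if is_goal(node):          # delayed goal testing, as in the pseudocode
            return path
        if node in closed:         # duplicate check on expansion
            continue
        closed.add(node)
        for child in successors(node):
            if child not in closed:
                open_list.append((child, path + [child]))
    return None

graph = {'S': ['A', 'B'], 'A': ['G'], 'B': ['G'], 'G': []}
print(general_search('S', lambda n: graph[n], lambda n: n == 'G'))  # ['S', 'A', 'G']
```

The same skeleton yields the uninformed strategies discussed below by changing only the OPEN-list discipline.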

Search Strategies: Search Directions
The objective of a search procedure is to discover a path through a problem space from an initial configuration to a goal state. There are two directions in which a search can proceed:
- Forward, from the start states.
- Backward, from the goal states.

Forward Search (Data-Directed / Data-Driven Reasoning / Forward Chaining)
This search starts from the available information and tries to draw conclusions regarding the situation or the goal attainment. The process continues until (hopefully) it generates a path that satisfies the goal condition.

Backward Search (Goal-Directed / Goal-Driven Reasoning / Backward Chaining)
This search starts from expectations of what the goal is or what is to happen (a hypothesis), then seeks evidence that supports (or contradicts) those expectations. The problem solver begins with the goal to be solved, finds rules or moves that could be used to generate this goal, and determines what conditions must be true to use them. These conditions become the new goals (sub-goals) for the search. The process continues, working backward through successive sub-goals, until (hopefully) a path is generated that leads back to the facts of the problem.

Goal-driven search uses knowledge of the goal to guide the search. Use goal-driven search if:
- A goal or hypothesis is given in the problem or can easily be formulated.
- There are a large number of rules that match the facts of the problem and thus produce an increasing number of conclusions or goals (so data-driven search would be inefficient).
- Problem data are not given but must be acquired by the problem solver (so data-driven search would be impossible).

Data-driven search uses knowledge and constraints found in the given data to search along lines known to be true. Use data-driven search if:
- All or most of the data are given in the initial problem statement.
- There are a large number of potential goals, but only a few ways to use the facts and the given information of a particular problem.
- It is difficult to form a goal or hypothesis.

Evaluating Search Strategies
- Completeness: guarantees finding a solution whenever one exists.
- Time complexity: how long (worst or average case) does it take to find a solution? Usually measured by the number of nodes expanded.
- Space complexity: how much memory is used by the algorithm? Usually measured by the maximum size the OPEN list reaches during the search.
- Optimality/Admissibility: if a solution is found, is it guaranteed to be optimal or of the highest quality among all solutions, e.g. the one with minimum cost?

Uninformed vs. Informed Search
Uninformed (blind) search strategies:
- Breadth-first search
- Depth-first search
- Uniform-cost search
- Depth-first iterative deepening search
Informed (heuristic) search strategies:
- Hill climbing
- Best-first search
- Greedy search
- Beam search
- Algorithm A
- Algorithm A*

Blind Search / Uninformed Search
A blind search is a search that has no information about its domain. The only thing a blind search can do is distinguish a non-goal state from a goal state.

Breadth-First Search explores the state space level by level. Only when there are no more states to be explored at a given level does the algorithm move on to the next level. (Diagram: a tree with root A whose nodes B through G are visited level by level.)
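Level-by-level exploration follows directly from using a FIFO queue as the fringe. An illustrative sketch (the tiny example graph is assumed, not taken from the slides):

```python
from collections import deque

def bfs(start, goal, neighbors):
    """Breadth-first search: the FIFO queue guarantees that all nodes at
    depth k are expanded before any node at depth k+1."""
    queue = deque([[start]])   # queue of paths
    discovered = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbors(path[-1]):
            if nxt not in discovered:
                discovered.add(nxt)
                queue.append(path + [nxt])
    return None

g = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'],
     'D': [], 'E': [], 'F': [], 'G': []}
print(bfs('A', 'G', g.get))  # ['A', 'C', 'G']
```

Because the queue is first-in first-out, the first path that reaches the goal is a shallowest one.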

(Diagram: BFS visits the nodes of a tree in numbered order, level by level, until the goal node is reached.)

BFS Time Complexity and Space Complexity:
- Looking at how BFS expands from the root, it first generates some number of nodes at the first level, say b.
- On the second level this becomes b^2.
- On the third level it becomes b^3.
- And so on, until it reaches b^d for some depth d.
1 + b + b^2 + b^3 + ... + b^d, which is O(b^d).
Since all leaf nodes need to be stored in memory, the space complexity is the same as the time complexity.

BFS
As you can see, BFS is very systematic and guaranteed to find a solution. What does this mean for the four evaluation criteria?
- BFS is complete: if an answer exists, it will be found (b must be finite).
- BFS is optimal (if the cost is 1 per step): the path found from the initial state to the goal state is a shallowest one.

(Animated trace: breadth-first search on an example graph with vertices A to H, starting from A. Vertices are labelled with their distance from A as they are discovered: B, C, G, and F at distance 1; D and E at distance 2; H at distance 3. A FIFO queue holds the fringe, and each vertex passes from undiscovered to fringe to finished as the search proceeds.)

(Animated trace: breadth-first search on a second example graph with vertices A to I, showing the FIFO queue at each step. The source A is enqueued first; dequeuing a vertex visits its neighbours and enqueues each newly discovered one, skipping vertices already discovered. From A the vertices are discovered in the order B, I, F, E, G, C, H, D and finished in the order A, B, I, F, E, G, C, H, D, at which point the queue is empty and the search stops.)

Bi-Directional BFS
(Diagrams: two breadth-first searches proceed simultaneously, one forward from the initial state SI and one backward from the goal state SG, numbering nodes as they are expanded, until the two frontiers meet.)

Uniform Cost Search Expands the least cost leaf node first. It is complete, and unlike BFS, is optimal even when operators have different costs. Its space and time complexity are the same as for breadth-first search.
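A priority queue keyed on path cost gives a direct sketch of uniform-cost search. The example graph below is hypothetical; it is chosen so that the fewest-step path is not the cheapest:

```python
import heapq

def uniform_cost_search(start, goal, edges):
    """Expand the least-cost frontier node first; optimal when all step
    costs are non-negative."""
    frontier = [(0, start, [start])]   # (path cost, node, path)
    best = {}                          # cheapest known cost per expanded node
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in best and best[node] <= cost:
            continue                   # stale duplicate entry
        best[node] = cost
        for nxt, step in edges.get(node, []):
            heapq.heappush(frontier, (cost + step, nxt, path + [nxt]))
    return None

# Hypothetical weighted graph: the 2-step path S-A-G costs 11,
# but the cheapest path is S-B-G at cost 10.
edges = {'S': [('A', 1), ('B', 5), ('G', 15)], 'A': [('G', 10)], 'B': [('G', 5)]}
print(uniform_cost_search('S', 'G', edges))  # (10, ['S', 'B', 'G'])
```

Note that the goal test happens when a node is popped, not when it is generated; that is what makes the result optimal.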

Uniform Cost Search: Example
(Diagram: a weighted graph with start S, goal G, and intermediate nodes A, B, C. The path S-A-G costs 1 + 10 = 11 and the path S-B-G costs 5 + 5 = 10, while the route through C is costlier still. Uniform-cost search expands the least-cost node at each step, so it returns the optimal solution S-B-G with cost 10, even though G is reached via the dearer routes first.)

Depth-First Search begins at the root node (i.e. the initial node) and works downward to successively deeper levels. An operator is applied to the node to generate the next deeper node in the sequence. This process continues until a solution is found or backtracking is forced by reaching a dead end.
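This generate-and-backtrack behaviour is naturally recursive. An illustrative sketch (the example graph is assumed, not from the slides):

```python
def dfs(node, goal, neighbors, path=None):
    """Depth-first: follow one branch to a dead end, then backtrack and
    try the next branch."""
    path = (path or []) + [node]
    if node == goal:
        return path
    for nxt in neighbors(node):
        if nxt not in path:            # avoid cycles along the current path
            found = dfs(nxt, goal, neighbors, path)
            if found:
                return found
    return None                        # dead end: backtrack

g = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'],
     'D': [], 'E': [], 'F': [], 'G': []}
print(dfs('A', 'G', lambda n: g.get(n, [])))  # ['A', 'C', 'G']
```

Here the search first descends A-B-D, backtracks through E, and only then finds the goal under C.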

Depth-first search (14 Jan 2004, CS Blind Search)

(Animated sequence: depth-first search repeatedly expands the deepest unexpanded node, following one branch of the tree all the way down before backtracking to try the next branch.)

(Diagrams: depth-first search on a tree with nodes A through O, and a depth-first traversal with the states numbered in visiting order from the initial state SI to the Goal.)

Evaluation of DFS: Since we don't expand all nodes at a level, the space complexity is modest: for branching factor b and depth d, only about b·d nodes need to be stored in memory, which is much better than b^d. In some cases DFS can be faster than BFS; however, the worst-case time is still O(b^d). If the search tree is very deep (or infinite, which is quite possible), DFS may run off to infinity and never recover. Thus DFS is neither optimal nor complete.

Depth-Limited Search avoids the pitfalls of depth-first search by imposing a cut-off on the maximum depth of a path. Depth-limited search is complete (provided the limit is at least the depth of the shallowest goal) but not optimal; if we choose a depth limit that is too small, it is not even complete. Its time and space complexity are similar to depth-first search.
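Depth-limited search is depth-first search with one extra parameter. A minimal sketch (illustrative names and example graph):

```python
def depth_limited_search(node, goal, neighbors, limit):
    """DFS with a depth cut-off: returns a path to the goal, or None if
    no goal lies within `limit` steps of `node`."""
    if node == goal:
        return [node]
    if limit == 0:
        return None                    # cut-off reached
    for nxt in neighbors(node):
        sub = depth_limited_search(nxt, goal, neighbors, limit - 1)
        if sub:
            return [node] + sub
    return None

g = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'],
     'D': [], 'E': [], 'F': [], 'G': []}
print(depth_limited_search('A', 'G', g.get, 1))  # None: the limit is too small
print(depth_limited_search('A', 'G', g.get, 2))  # ['A', 'C', 'G']
```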

Depth-First Iterative Deepening (DFID)
- BFS and DFS both have exponential time complexity O(b^d).
- BFS is complete but has exponential space complexity.
- DFS has linear space complexity but is incomplete.
- Space is often a harder resource constraint than time.
Can we have an algorithm that is complete, has linear space complexity, and has time complexity O(b^d)? Yes: DFID, introduced by Korf in 1985. The DFID algorithm repeatedly carries out DFS on the tree, starting with a DFS limited to a depth of one, then a DFS of depth two, and so on, until a goal is found.
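DFID simply wraps a depth-limited DFS in a loop over increasing limits. An illustrative sketch (the max_depth cap is my own assumption, added only to keep the loop finite on goal-free graphs):

```python
def iterative_deepening(start, goal, neighbors, max_depth=50):
    """Run depth-limited DFS with limits 0, 1, 2, ... until a goal is found."""
    def dls(node, limit):
        if node == goal:
            return [node]
        if limit == 0:
            return None
        for nxt in neighbors(node):
            sub = dls(nxt, limit - 1)
            if sub:
                return [node] + sub
        return None

    for depth in range(max_depth + 1):   # deepen one level at a time
        result = dls(start, depth)
        if result:
            return result
    return None

g = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'],
     'D': [], 'E': [], 'F': [], 'G': []}
print(iterative_deepening('A', 'G', g.get))  # ['A', 'C', 'G'], found at limit 2
```

The shallow levels are re-searched on every iteration, but as the next slides show, that overhead is small.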

Iterative deepening search, l = 0

Iterative deepening search, l = 1

Iterative deepening search, l = 2

Iterative deepening search, l = 3

(Diagrams: the same tree searched four times, with the depth limit raised from 0 to 3.)

Iterative deepening search
Number of nodes generated in a depth-limited search to depth d with branching factor b:
NDLS = b^0 + b^1 + b^2 + ... + b^(d-2) + b^(d-1) + b^d
Number of nodes generated in an iterative deepening search to depth d with branching factor b:
NIDS = (d+1)b^0 + d·b^1 + (d-1)b^2 + ... + 3b^(d-2) + 2b^(d-1) + 1·b^d
For b = 10, d = 5:
NDLS = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111
NIDS = 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456
Overhead = (123,456 - 111,111)/111,111 = 11%
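These node counts are easy to check numerically; the short sketch below recomputes the b = 10, d = 5 figures from the two formulas:

```python
b, d = 10, 5
# Depth-limited search to depth d generates every node once:
n_dls = sum(b**i for i in range(d + 1))
# Iterative deepening regenerates level i on (d + 1 - i) of its passes:
n_ids = sum((d + 1 - i) * b**i for i in range(d + 1))
print(n_dls, n_ids)                          # 111111 123456
print(round(100 * (n_ids - n_dls) / n_dls))  # 11 (percent overhead)
```

Because the deepest level dominates the sum, repeating the shallow levels costs only about 11% extra work here.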

Summary of algorithms

| Criterion | Breadth-First | Uniform-Cost | Depth-First | Depth-Limited | Iterative Deepening |
|---|---|---|---|---|---|
| Complete? | Yes | Yes | No | No (Yes if l >= d) | Yes |
| Time | O(b^(d+1)) | O(b^(d+1)) | O(b^m) | O(b^l) | O(b^d) |
| Space | O(b^(d+1)) | O(b^(d+1)) | O(bm) | O(bl) | O(bd) |
| Optimal? | Yes (unit step costs) | Yes | No | No | Yes (unit step costs) |

(Here m is the maximum depth of the search tree and l is the depth limit.)

Repeated states Failure to detect repeated states can turn a linear problem into an exponential one!
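The blow-up is easy to demonstrate: a chain in which each state is reachable by two different moves has a linear number of distinct states but an exponential number of paths. This illustrative sketch counts BFS expansions with and without duplicate detection:

```python
from collections import deque

def count_expansions(start, neighbors, detect_repeats):
    """Count node expansions in BFS, optionally skipping repeated states."""
    queue, seen, expanded = deque([start]), {start}, 0
    while queue:
        node = queue.popleft()
        expanded += 1
        for nxt in neighbors(node):
            if detect_repeats and nxt in seen:
                continue
            seen.add(nxt)
            queue.append(nxt)
    return expanded

# Hypothetical "chain of diamonds": state i reaches i+1 by two different
# moves, so the number of paths doubles at every step.
n = 10
neighbors = lambda i: [i + 1, i + 1] if i < n else []
print(count_expansions(0, neighbors, True))   # 11: one expansion per state
print(count_expansions(0, neighbors, False))  # 2047: one per path, 2^(n+1) - 1
```

Eleven distinct states versus 2,047 expansions: exactly the linear-to-exponential failure the slide warns about.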

How to deal with repeated states?
- Do not return to the state you just came from: have the expand function refuse to generate any successor that is the same state as the node's parent.
- Do not repeat paths with cycles in them: have the expand function refuse to generate any successor of a node that is the same as any of the node's ancestors.
- Do not generate any state that was ever generated before: keep every generated state in memory (e.g. on the CLOSED list) and check each new state against it.