
1 Artificial Intelligence Lecture No. 6

2 Summary of Previous Lecture
Different types of environments
Intelligent agent (IA) examples based on environment
Agent types:
Simple reflex agents
Reflex agents with state/model
Goal-based agents
Utility-based agents

3 Today’s Lecture
Problem solving by searching
What is Search?
Problem formulation
Search Space Definitions
Goal-formulation
Searching for Solutions
Visualize the search space as a graph

4 Problem-Solving Agent
In which we look at how an agent can decide what to do by systematically considering the outcomes of various sequences of actions that it might take. - Stuart Russell & Peter Norvig

5 Problem solving agent
A kind of goal-based agent.
It decides what to do by searching for sequences of actions that lead to desirable states.

6 Problem Definition
Initial state: the starting point
Operator: description of an action
State space: all states reachable from the initial state by any sequence of actions
Path: a sequence of actions leading from one state to another
Goal test: a test the agent can apply to a single state description to determine whether it is a goal state
Path cost function: assigns a cost to a path, typically the sum of the costs of the individual actions along the path
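As a minimal sketch (not part of the slides), these components can be written down as a small Python interface; the names SearchProblem, actions, result, goal_test, and step_cost are assumptions chosen for illustration.

```python
# Illustrative sketch: a generic container for the problem components listed
# above (initial state, operators, goal test, path cost function).
class SearchProblem:
    def __init__(self, initial_state):
        self.initial_state = initial_state      # starting point

    def actions(self, state):
        """Return the operators applicable in `state`."""
        raise NotImplementedError

    def result(self, state, action):
        """Return the state reached by applying `action` in `state`."""
        raise NotImplementedError

    def goal_test(self, state):
        """Return True if `state` is a goal state."""
        raise NotImplementedError

    def step_cost(self, state, action):
        """Cost of one action; the path cost is the sum of these."""
        return 1


def path_cost(problem, state, actions):
    """Sum the costs of the individual actions along a path."""
    total = 0
    for a in actions:
        total += problem.step_cost(state, a)
        state = problem.result(state, a)
    return total
```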

7 What is Search?
Search is the systematic examination of states to find a path from the start/root state to the goal state. The set of possible states, together with the operators defining their connectivity, constitutes the search space. The output of a search algorithm is a solution, that is, a path from the initial state to a state that satisfies the goal test. In real life, search usually results from a lack of knowledge. In AI, too, search is merely an offensive instrument with which to attack problems that we cannot seem to solve any better way.

8 Search groups
Search techniques fall into three groups:
Methods which find any start-goal path
Methods which find the best path
Search methods in the face of an opponent

9 Search
An agent with several immediate options of unknown value can decide what to do by first examining the different possible sequences of actions that lead to states of known value, and then choosing the best one. This process is called search. A search algorithm takes a problem as input and returns a solution in the form of an action sequence.

10 Problem formulation
What are the possible states of the world relevant for solving the problem?
What information is accessible to the agent?
How can the agent progress from state to state?
Problem formulation follows goal-formulation.
Courtesy: Dr. Franz J. Kurfess

11 Well-defined problems and solutions
A problem is a collection of information that the agent will use to decide what to do. The information needed to define a problem:
The initial state that the agent knows itself to be in.
The set of possible actions available to the agent. An operator denotes the description of an action in terms of which state will be reached by carrying out the action in a particular state. Also called the successor function S: given a particular state x, S(x) returns the set of states reachable from x by any single action.

12 State space and a path
The state space is the set of all states reachable from the initial state by any sequence of actions. A path in the state space is simply any sequence of actions leading from one state to another.

13 Search Space Definitions
Problem formulation: describe a general problem as a search problem
Solution: sequence of actions that transitions the world from the initial state to a goal state
Solution cost (additive): sum of the cost of operators (alternatives: sum of distances, number of steps, etc.)
Search: the process of looking for a solution; a search algorithm takes a problem as input and returns a solution; we are searching through a space of possible states
Execution: the process of executing the sequence of actions (the solution)

14 Goal-formulation
What is the goal state?
What are important characteristics of the goal state?
How does the agent know that it has reached the goal?
Are there several possible goal states? Are they equal, or are some more preferable?
Courtesy: Dr. Franz J. Kurfess

15 Goal We will consider a goal to be a set of world states – just those states in which the goal is satisfied. Actions can be viewed as causing transitions between world states.

16 Looking for Parking
Going home; need to find street parking.
Formulate goal: car is parked
Formulate problem:
States: street with parking and car at that street
Actions: drive between street segments
Find solution: a sequence of street segments, ending with a street with parking

17 Example Problem (diagram: from the start street to a street with parking)

18 Search Example
Formulate goal: be in Bucharest.
Formulate problem: states are cities; operators drive between pairs of cities.
Find solution: find a sequence of cities (e.g., Arad, Sibiu, Fagaras, Bucharest) that leads from the current state to a state meeting the goal condition.

19 Problem Formulation
A search problem is defined by the:
Initial state (e.g., Arad)
Operators (e.g., Arad -> Zerind, Arad -> Sibiu, etc.)
Goal test (e.g., at Bucharest)
Solution cost (e.g., path cost)
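As a sketch, this formulation can be written directly in Python. The dictionary below covers only a fragment of the Romania map; the city names and road distances follow the usual textbook map but are included here purely for illustration.

```python
# Sketch: part of the Romania touring problem as a graph dictionary.
romania = {
    "Arad":           {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
    "Zerind":         {"Arad": 75, "Oradea": 71},
    "Sibiu":          {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80, "Oradea": 151},
    "Fagaras":        {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti":        {"Rimnicu Vilcea": 97, "Bucharest": 101},
    "Oradea":         {"Zerind": 71, "Sibiu": 151},
    "Timisoara":      {"Arad": 118},
    "Bucharest":      {"Fagaras": 211, "Pitesti": 101},
}

initial_state = "Arad"

def goal_test(state):
    return state == "Bucharest"

def solution_cost(path):
    """Path cost = sum of the road distances along the path."""
    return sum(romania[a][b] for a, b in zip(path, path[1:]))

print(solution_cost(["Arad", "Sibiu", "Fagaras", "Bucharest"]))  # 140 + 99 + 211 = 450
```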

20 Examples (2): Vacuum World
8 possible world states
3 possible actions: Left / Right / Suck
Goal: clean up all the dirt = state 7 or state 8

21 Vacuum World
States: S1, S2, S3, S4, S5, S6, S7, S8
Operators: Go Left, Go Right, Suck
Goal test: no dirt left in either square
Path cost: each action costs 1
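A minimal sketch of this state space in Python, assuming the two squares are named A and B and a state is the triple (agent location, dirt in A, dirt in B); the function names are illustrative.

```python
# Sketch of the vacuum world: 2 locations x 2 x 2 dirt flags = 8 states.
ACTIONS = ["Left", "Right", "Suck"]

def result(state, action):
    loc, dirt_a, dirt_b = state
    if action == "Left":
        return ("A", dirt_a, dirt_b)
    if action == "Right":
        return ("B", dirt_a, dirt_b)
    if action == "Suck":                      # clean the current square
        if loc == "A":
            return ("A", False, dirt_b)
        return ("B", dirt_a, False)
    raise ValueError(action)

def goal_test(state):
    _, dirt_a, dirt_b = state                 # no dirt left in either square
    return not dirt_a and not dirt_b

# Each action costs 1, so the path cost is simply the number of actions taken.
```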

22 Example Problems – Eight Puzzle
States: tile locations
Initial state: one specific tile configuration
Operators: move the blank tile left, right, up, or down
Goal: tiles are numbered one to eight around the square
Path cost: cost of 1 per move (the solution cost is the same as the number of moves, i.e., the path length)

23 Single-State problem and Multiple-States problem
World is accessible → the agent’s sensors give enough information about which state it is in (so it knows what each of its actions does), and it can then calculate exactly which state it will be in after any sequence of actions. This is a single-state problem.
World is inaccessible → the agent has limited access to the world state; it may have no sensors at all. It knows only that the initial state is one of the set {1, 2, 3, 4, 5, 6, 7, 8}. This is a multiple-states problem.

24 Think of the graph defined as follows:
The nodes denote descriptions of a state of the world (e.g., which blocks are on top of what in a blocks scene), and the links represent actions that change one state into another. A path through such a graph (from a start node to a goal node) is a "plan of action" to achieve some desired goal state from some known starting state. It is this type of graph that is of more general interest in AI.

25 Searching for Solutions: Visualize the Search Space as a Tree
States are nodes
Actions are edges
The initial state is the root
A solution is a path from the root to a goal node
Edges sometimes have associated costs
States resulting from an operator are children

26 Directed graphs
A graph is also a set of nodes connected by links, but where loops are allowed and a node can have multiple parents. We have two kinds of graphs to deal with: directed graphs, where the links have direction (like one-way streets).

27 Undirected graphs
Undirected graphs are graphs where the links go both ways. You can think of an undirected graph as shorthand for a graph with directed links going each way between connected nodes.

28 Searching for solutions: Graphs or trees
The map of all paths within a state space is a graph of nodes which are connected by links. If we trace out all possible paths through the graph, terminating paths before they return to nodes already visited on that path, we produce a search tree. Like graphs, trees have nodes, but they are linked by branches. The start node is called the root and the nodes at the other ends are leaves. Nodes have generations of descendants. The aim of search is not to produce a complete physical tree in memory, but rather to explore as little of the virtual tree as possible while looking for root-goal paths.
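A sketch of this idea in Python: given a graph as a dictionary of successor lists, the generator below traces out root-goal paths of the virtual search tree, cutting off any path that returns to a node already visited on that path. The names are illustrative.

```python
# Sketch: enumerate root-to-goal paths, terminating loops back into the path.
def tree_paths(graph, node, goal_test, path=()):
    path = path + (node,)
    if goal_test(node):
        yield list(path)
        return
    for child in graph.get(node, []):
        if child not in path:                 # don't revisit a node on this path
            yield from tree_paths(graph, child, goal_test, path)

# Example: list(tree_paths({"A": ["B", "C"], "B": ["C"], "C": []}, "A", lambda n: n == "C"))
# yields [["A", "B", "C"], ["A", "C"]]
```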

29 Search Problem Example (as a tree) (start: Arad, goal: Bucharest.)

30 Summary of Today’s Lecture
Problem solving by searching
What is Search?
Problem formulation
Search Space Definitions
Goal-formulation
Examples
Searching for Solutions
Visualize the search space as a graph
Directed graphs and undirected graphs

31 http://www.nltk.org/ NLP

32 Problem solving using search

33 Goal Directed Agent
A goal-directed agent needs to achieve certain goals. Such an agent selects its actions based on the goals it has. Many problems can be represented as a set of states and a set of rules describing how one state is transformed into another.

34 State space
Each state is an abstract representation of the agent's environment; it is an abstraction that denotes a configuration of the agent.
Initial state: the description of the starting configuration of the agent
An action/operator takes the agent from one state to another state. A state can have a number of successor states.
A plan is a sequence of actions.

35 State space
A goal is a description of a set of desirable states of the world. Goal states are often specified by a goal test which any goal state must satisfy.
Path cost: the sum of the costs at each step

36 Problem Formulation Problem formulation means choosing a relevant set of states to consider, and a feasible set of operators for moving from one state to another. Search is the process of considering various possible sequences of operators applied to the initial state, and finding out a sequence which culminates in a goal state.

37 Search Problem
A search problem consists of the following:
S: the full set of states
s0: the initial state
A: S → S is a set of operators
G: the set of final states; note that G ⊆ S

38 Search Problem
The search problem is to find a sequence of actions which transforms the agent from the initial state to a goal state g ∈ G.

39 State space representation

40 Representation of search problems
A search problem is represented using a directed graph. The states are represented as nodes. The allowed actions are represented as arcs.

41 Plan & path
Plan: the sequence of actions is called a solution plan. It is a path from the initial state to a goal state. A plan P is a sequence of actions P = {a0, a1, …, aN} which leads to traversing a number of states {s0, s1, …, sN+1}, where sN+1 ∈ G.
Path: a sequence of states is called a path. The cost of a path is a positive number. In many cases the path cost is computed by taking the sum of the costs of each action.

42 Searching process
The generic searching process can be very simply described in terms of the following steps. Do until a solution is found or the state space is exhausted:
Check the current state
Execute allowable actions to find the successor states
Pick one of the new states
Check if the new state is a solution state
If it is not, the new state becomes the current state and the process is repeated
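A hedged sketch of this loop in Python; successors and is_solution are assumed callbacks, the result returned is the solution state (keeping the full path would require parent pointers, which this sketch omits), and the order in which states are picked from the frontier is deliberately left open, since that choice is what later defines the search strategy.

```python
# Sketch of the generic searching process using an explicit frontier.
def generic_search(initial_state, successors, is_solution):
    frontier = [initial_state]                # states generated but not yet picked
    visited = set()
    while frontier:                           # until solution found or space exhausted
        current = frontier.pop()              # pick one of the new states
        if is_solution(current):              # check if it is a solution state
            return current
        if current in visited:
            continue
        visited.add(current)
        frontier.extend(successors(current))  # allowable actions -> successor states
    return None                               # state space exhausted, no solution
```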

43 Illustration of a search process

44 Illustration of a search process

45 Illustration of a search process
The grey nodes define the search tree. Usually the search tree is extended one node at a time. The order in which the search tree is extended depends on the search strategy. Starting from a given state and following the successors, we can find solution paths that lead to a goal state.

46 Example problem: Pegs and Disks problem
Consider the following problem: we have 3 pegs and 3 disks.
Operators: one may move the topmost disk on any peg to the topmost position on any other peg.
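One possible state representation, sketched in Python under assumed conventions (disks numbered 1-3, each peg listed bottom to top as a tuple): the operator moves the topmost disk of one peg onto any other peg.

```python
# Sketch: a pegs-and-disks state as a tuple of three tuples (one per peg).
initial_state = ((3, 2, 1), (), ())          # all three disks on the first peg

def moves(state):
    """Operator: move the topmost disk of any peg to the top of any other peg."""
    for i, src in enumerate(state):
        if not src:
            continue
        for j in range(len(state)):
            if i == j:
                continue
            new = list(state)
            new[i] = src[:-1]                 # remove the topmost disk ...
            new[j] = state[j] + (src[-1],)    # ... and place it on the other peg
            yield tuple(new)
```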

47 Initial state

48 Goal state

49 Sequence of actions

50 Sequence of actions

51 Sequence of actions

52 Sequence of actions

53 Sequence of actions

54 Sequence of actions

55 Sequence of actions

56 8 queens problem
The problem is to place 8 queens on a chessboard so that no two queens are in the same row, column, or diagonal. The picture on the left shows a solution of the 8-queens problem. The picture on the right is not a correct solution, because some of the queens are attacking each other.
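A sketch of the "no two queens attack each other" condition in Python, assuming queens are represented as (row, column) pairs; the function names are illustrative.

```python
# Sketch: two queens attack each other if they share a row, column, or diagonal.
def attacks(q1, q2):
    r1, c1 = q1
    r2, c2 = q2
    return r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2)

def is_solution(queens):
    """True if 8 queens are placed and no pair attacks each other."""
    return len(queens) == 8 and all(
        not attacks(queens[i], queens[j])
        for i in range(len(queens)) for j in range(i + 1, len(queens))
    )
```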

57 8 queens problem

58 N queens problem formulation
Formulating the problem involves:
Deciding the representation of the states
Selecting the initial state representation
Describing the operators and the successor states
We can formulate the search problem in several different ways for this problem.

59 N queens problem formulation
States: any arrangement of 0 to 8 queens on the board
Initial state: 0 queens on the board
Successor function: add a queen in any square
Goal test: 8 queens on the board, none attacked

60 N queens problem formulation 1
The initial state has 64 successors. Each of the states at the next level has 63 successors, and so on. We can restrict the search tree somewhat by considering only those successors where no two queens attack each other. To do that we have to check the new queen against all existing queens on the board.

61 N queens problem formulation 1

62 N queens problem formulation 2
States: any arrangement of 8 queens on the board
Initial state: all queens are in column 1
Successor function: change the position of any one queen
Goal test: 8 queens on the board, none attacked
If we consider moving the queen at column 1, it may move to any of the seven remaining columns.

63 N queens problem formulation 2

64 N queens problem formulation 3
States: any arrangement of k queens in the first k rows such that none are attacked
Initial state: 0 queens on the board
Successor function: add a queen to the (k+1)th row so that none are attacked
Goal test: 8 queens on the board, none are attacked
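A sketch of formulation 3 in Python, under the assumption that a state is a tuple of column positions, one per filled row: row clashes are then impossible by construction, so successors only need to rule out column and diagonal clashes.

```python
# Sketch: formulation 3 successor function for the N-queens problem.
def successors(state, n=8):
    row = len(state)                          # next row to fill (rows 0..row-1 are done)
    for col in range(n):
        safe = all(
            col != c and abs(col - c) != row - r   # not same column, not same diagonal
            for r, c in enumerate(state)
        )
        if safe:
            yield state + (col,)

def goal_test(state, n=8):
    return len(state) == n                    # n queens placed, none attacked by construction
```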

65 N queens problem formulation 3

66 Example: 8 puzzle

67 Example: 8 puzzle
In the 8-puzzle problem we have a 3×3 square board and 8 numbered tiles. The board has one blank position. Blocks can be slid to adjacent blank positions.

68 Example: 8 puzzle
The state space representation for this problem is summarized below:
States: a state is a description of the location of each of the eight tiles
Operators/Actions: the blank moves left, right, up, or down
Goal test: the current state matches a certain goal state (e.g., one of the ones shown on a previous slide)
Path cost: each move of the blank costs 1
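A sketch of the successor function under an assumed encoding: the board is a 9-tuple read row by row, with 0 marking the blank. The goal configuration shown is just one possible choice.

```python
# Sketch: 8-puzzle successors. The blank moves left, right, up, or down (cost 1 each).
MOVES = {"Left": -1, "Right": +1, "Up": -3, "Down": +3}

def successors(state):
    blank = state.index(0)
    row, col = divmod(blank, 3)
    for action, delta in MOVES.items():
        if action == "Left" and col == 0:   continue
        if action == "Right" and col == 2:  continue
        if action == "Up" and row == 0:     continue
        if action == "Down" and row == 2:   continue
        target = blank + delta
        new = list(state)
        new[blank], new[target] = new[target], new[blank]   # slide the tile into the blank
        yield action, tuple(new)

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)            # one possible goal configuration

def goal_test(state):
    return state == GOAL
```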

69 8-puzzle partial state space

70 Example: tic-tac-toe
Another example we will consider is the game of tic-tac-toe. This is a game that involves two players who play alternately. Player one puts an X in an empty position, and player two places an O in an unoccupied position. The player who first gets three of their symbols in the same row, column, or diagonal wins.
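A sketch under an assumed encoding (a 9-character string, "-" for an empty cell): the successor function places the current player's symbol in any unoccupied position, and a simple check tests the three-in-a-row winning condition.

```python
# Sketch: tic-tac-toe successor moves and winning test.
def moves(board, player):
    for i, cell in enumerate(board):
        if cell == "-":
            yield board[:i] + player + board[i + 1:]

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),     # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),     # columns
         (0, 4, 8), (2, 4, 6)]                # diagonals

def wins(board, player):
    return any(all(board[i] == player for i in line) for line in LINES)
```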

71 A portion of the state space of tic-tac- toe

72 Explicit vs Implicit state space
The state space may be explicitly represented by a graph. More typically, the state space is implicitly represented and generated when required. To generate the state space implicitly, the agent needs to know:
The initial state
The operators and a description of the effects of the operators
An operator is a function which "expands" a node, i.e., computes its successor node(s).
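A sketch of implicit generation in Python: an operator is modelled as a function from a state to a successor state (or None when it is not applicable), and expand applies every operator to a node. The tiny integer world at the end is purely illustrative.

```python
# Sketch: generating the state space implicitly by expanding nodes on demand.
def expand(state, operators):
    """operators: functions state -> successor state, or None if not applicable."""
    for op in operators:
        successor = op(state)
        if successor is not None:
            yield successor

# Toy example: states are integers, operators are "+1" and "double (if positive)".
ops = [lambda s: s + 1, lambda s: s * 2 if s > 0 else None]
print(list(expand(3, ops)))                   # [4, 6]
```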

73 Search through a state space
Input:
Set of states
Operators (and their costs)
Start state
Goal state
Output:
Path from the start state to a state satisfying the goal test

