Artificial Intelligence Representation and Search Techniques

Presentation transcript:

Artificial Intelligence: Representation and Search Techniques. L. Manevitz, Lecture 2 (copyright 2005, all rights reserved).

Goals of Lecture
Representing problems with production systems: various issues and considerations.
Production systems: state space, goals, transformation rules, control (search techniques).

Elements of Production Systems
Representation: state space, goal states, initial states, transformation rules.
Search algorithms: uninformed, "heuristic".

Examples of Problems
"Toy" problems: water jug, missionaries and cannibals, 8-queens, 8-puzzle.

Examples of Problems (cont.)
"Real" problems: scheduling, traveling salesman, robot navigation, language analysis (parsers, grammars), VLSI design.

Points to Consider when Representing Problems
- Is the problem decomposable?
- Can partial steps be ignored or undone? (theorem proving vs. chess; the blocks A, B, C example)
- Is the outcome predictable? (chess vs. bridge)

Points to Consider when Representing Problems (cont.)
- Is a "good" solution easily recognizable? (e.g., traveling salesman)
- Is the knowledge base consistent? (large databases and the limitations of logic)
- How much knowledge is needed? (chess vs. newspaper understanding)
- Stand-alone vs. interactive solving.

(Diagram: the three components of a production system: Control, Rules, Data.)

Issues in Representing a Problem
- Choice of representation for the database.
- Specify initial states.
- Specify goal states.
- Choose appropriate rules.
Issues: assumptions built into the problem, how general the rules are, how much work is pre-computed and put into the rules, and control (discussed later).

Water Jug Problem
Two jugs: one holds 4 liters, the other 3 liters. Represent a state as <x, y>, where x is the amount in the 4-liter jug and y the amount in the 3-liter jug.
A first attempt takes x and y to be real numbers (0 <= x <= 4, 0 <= y <= 3); a more useful representation restricts them to integers.

Water Jugs (cont.)
Rules:
1. <x,y> with x < 4  ->  <4,y>              (fill the 4-liter jug)
2. <x,y> with y < 3  ->  <x,3>              (fill the 3-liter jug)
3. <x,y>  ->  <0,y>                         (empty the 4-liter jug)
4. <x,y>  ->  <x,0>                         (empty the 3-liter jug)
5. <x,y> with x+y >= 4  ->  <4, y-(4-x)>    (pour from the 3-liter jug until the 4-liter jug is full)
6. <x,y> with x+y >= 3  ->  <x-(3-y), 3>    (pour from the 4-liter jug until the 3-liter jug is full)
7. <x,y> with x+y <= 4  ->  <x+y, 0>        (pour all of the 3-liter jug into the 4-liter jug)
8. <x,y> with x+y <= 3  ->  <0, x+y>        (pour all of the 4-liter jug into the 3-liter jug)
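To make the rules concrete, here is a minimal sketch (not from the original slides) of a water-jug solver in Python, with the eight rules folded into a successor function and a breadth-first control strategy; the names successors and solve are our own.

    from collections import deque

    def successors(state):
        """Apply the water-jug rules to a state (x, y):
        x = liters in the 4-liter jug, y = liters in the 3-liter jug."""
        x, y = state
        result = set()
        result.add((4, y))                    # rule 1: fill the 4-liter jug
        result.add((x, 3))                    # rule 2: fill the 3-liter jug
        result.add((0, y))                    # rule 3: empty the 4-liter jug
        result.add((x, 0))                    # rule 4: empty the 3-liter jug
        pour = min(4 - x, y)                  # rules 5/7: pour 3-liter into 4-liter
        result.add((x + pour, y - pour))
        pour = min(3 - y, x)                  # rules 6/8: pour 4-liter into 3-liter
        result.add((x - pour, y + pour))
        result.discard(state)                 # drop do-nothing moves
        return result

    def solve(start=(0, 0), goal_amount=2):
        """Breadth-first search for a state with goal_amount liters in the 4-liter jug."""
        frontier = deque([[start]])
        seen = {start}
        while frontier:
            path = frontier.popleft()
            if path[-1][0] == goal_amount:
                return path                   # a shortest sequence of states
            for nxt in successors(path[-1]):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(path + [nxt])
        return None

    print(solve())                            # prints one shortest solution path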

Example no. 1, CHESS:
R1 – If a pawn is not blocked, then move it one square forward.
R2 – If a pawn is in its original position and not blocked for two squares, move it two squares forward.
Etc.

Example no. 2, SPEECH:
R1 – If the input is not yet analyzed, try to identify phonemes.
R2 – Take some candidate syllables and try to form words.
Etc.

Production
DATA <- initial database.
Until DATA satisfies the termination condition do:
begin
    Select some rule R from the set of rules that can be applied to DATA.
    DATA <- result of applying R to DATA.
end
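A minimal sketch of this control loop in Python; applicable_rules, select, and terminal are placeholders for the problem-specific pieces and are our own names:

    def run_production_system(data, applicable_rules, select, terminal):
        """Generic production-system loop: apply rules until the termination test holds.

        data             -- the current database (state)
        applicable_rules -- returns the rules whose preconditions match data
        select           -- the control strategy: picks one rule from the matches
        terminal         -- the termination condition on the database"""
        while not terminal(data):
            rules = applicable_rules(data)
            if not rules:
                raise RuntimeError("no applicable rule: the search fails")
            rule = select(rules, data)        # all the intelligence is in this choice
            data = rule(data)                 # a rule maps one database to another
        return data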

Problem Description
- Define a state space containing all possible configurations of the relevant objects.
- Specify some states as initial states.
- Specify some states as goal states.
- Specify the rules.

Search Through State Space
(Diagram: a path through the state space from an initial state to a goal state.)

Control Strategies
What do we want? A strategy that causes motion and is systematic.
Examples: breadth-first, depth-first, backtracking, hill climbing, best-first.

Control via Search Techniques
"Uninformed": breadth-first, depth-first, backtracking.
"Informed" (heuristic): hill climbing, best-first, A*.

Costs of a Production System
(Diagram: rule-application cost, control cost, and total cost plotted against the level of "informedness"; the total cost is minimized at an intermediate level.)

Breadth First (cont.)

Breadth First
b = branching factor, d = depth.
Number of nodes generated: 1 + b + b^2 + … + b^d.
Time complexity: O(b^d). Space complexity: O(b^d).

Breadth First: growth with depth (branching factor 10)
Depth   Nodes      Time         Memory
2       111        0.1 sec      1 kilobyte
4       11,111     11 sec       1 megabyte
6       10^6       18 min       111 megabytes
8       10^8       31 hours     11 gigabytes
10      10^10      128 days     1 terabyte
12      10^12      35 years     111 terabytes
14      10^14      3500 years   11,111 terabytes

Depth First (cont.)

Depth First (some adjustments needed for a depth bound)
DEPTH-FIRST(INITIAL-DATA):
    DATA <- INITIAL-DATA
    IF DATA = GOAL THEN EXIT WITH NULL
    RULES <- APPRULES(DATA)
    IF RULES = NULL EXIT WITH FAILURE
    R <- FIRST(RULES)
    DATA <- R(DATA)
    IF DATA = GOAL EXIT WITH R
    PATH <- DEPTH-FIRST(DATA)
    IF PATH = FAILURE EXIT WITH FAILURE
    EXIT WITH R^PATH
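A minimal Python rendering of this pseudocode (our own naming), with the depth bound the title asks for added as a parameter:

    FAILURE = None

    def depth_first(data, goal_test, applicable_rules, bound=20):
        """Plain depth-first descent following the pseudocode: always take the
        first applicable rule. Returns a list of rules, or FAILURE (None).
        (The backtracking variants on the later slides add retrying other rules.)"""
        if goal_test(data):
            return []                          # goal reached: empty rule path
        if bound == 0:
            return FAILURE                     # the depth bound the slide mentions
        rules = applicable_rules(data)
        if not rules:
            return FAILURE
        r = rules[0]                           # R <- FIRST(RULES)
        path = depth_first(r(data), goal_test, applicable_rules, bound - 1)
        if path is FAILURE:
            return FAILURE
        return [r] + path                      # EXIT WITH R^PATH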

Backtracking (cont.)

Backtracking Algorithm
BACKTRACK(DATA):
    IF TERMINAL(DATA) RETURN NULL
    IF DEADEND(DATA) RETURN FAIL
    RULES <- APPRULES(DATA)
LOOP:
    IF RULES = NULL RETURN FAIL
    R <- BEST(RULES, DATA)
    RULES <- RULES - {R}
    NEWDATA <- R(DATA)
    PATH <- BACKTRACK(NEWDATA)
    IF PATH = FAIL THEN GO TO LOOP
    RETURN R^PATH

Backtracking
Works like depth-first search but stores only the current path.
Disadvantages: it may never terminate, either because new non-terminal databases keep being generated or because of cycles.
However, heuristics can be applied to choose the best rule at each step; better heuristics mean less backtracking.

Backtracking (cont.)
Try a rule; if it is not successful, go back and try a different one. Example: try the rules in the order L, U, R, D.
Back up when:
- a position is repeated on the current path (back to an earlier state);
- a fixed number of rules has already been applied (a depth bound);
- no more applicable rules can be found.

Backtrack1 Algorithm (checking for loops)
BACKTRACK1(DATALIST):
    DATA <- FIRST(DATALIST)
    IF MEMBER(DATA, TAIL(DATALIST)) RETURN FAIL
    IF TERMINAL(DATA) RETURN NULL
    IF DEADEND(DATA) RETURN FAIL
    IF LENGTH(DATALIST) > BOUND RETURN FAIL
    RULES <- APPRULES(DATA)
LOOP:
    IF RULES = NULL RETURN FAIL
    R <- FIRST(RULES)
    RULES <- RULES - {R}
    RDATA <- R(DATA)
    RDATALIST <- CONS(RDATA, DATALIST)
    PATH <- BACKTRACK1(RDATALIST)
    IF PATH = FAIL THEN GO TO LOOP
    RETURN CONS(R, PATH)
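A minimal Python sketch of BACKTRACK1 (our own naming); datalist is the path of databases so far, most recent first, exactly as in the pseudocode:

    FAIL = "FAIL"

    def backtrack1(datalist, terminal, deadend, applicable_rules, bound=20):
        """Backtracking with loop checking; returns a list of rules or FAIL."""
        data = datalist[0]
        if data in datalist[1:]:               # MEMBER(DATA, TAIL(DATALIST)): a loop
            return FAIL
        if terminal(data):
            return []                          # goal: empty rule path
        if deadend(data):
            return FAIL
        if len(datalist) > bound:              # depth bound exceeded
            return FAIL
        for r in applicable_rules(data):       # try each rule, backtracking on FAIL
            path = backtrack1([r(data)] + datalist, terminal, deadend,
                              applicable_rules, bound)
            if path != FAIL:
                return [r] + path              # CONS(R, PATH)
        return FAIL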

Search Tree
(Diagram: a search tree whose edges are labeled with rules R1, R2, R3, R5; one branch reaches the goal.)

8 Puzzle
Problem database: a 3x3 matrix, or a vector of length 9 (tiles 1-8 plus the blank).
Rules and their preconditions:
- Move blank left: blank not in the leftmost column.
- Move blank right: blank not in the rightmost column.
- Move blank up: blank not in the top row.
- Move blank down: blank not in the bottom row.

8 Puzzle (cont.)
(Diagram: an example initial board (tiles 6 4 5 3 1 8 7 2 plus the blank) is transformed into the goal board (tiles 1 3 2 5 4 7 6 8 plus the blank) by a sequence of moves: blank up, blank left, blank down, blank right.)
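A minimal sketch of this representation in Python (our own naming): a board is a length-9 tuple in row-major order with 0 standing for the blank, and each rule fires only if its precondition holds:

    def blank_moves(board):
        """Yield (rule_name, new_board) for each applicable blank move."""
        i = board.index(0)                     # position of the blank
        row, col = divmod(i, 3)
        preconditions = {"left": col > 0, "right": col < 2,
                         "up": row > 0, "down": row < 2}
        offsets = {"left": -1, "right": 1, "up": -3, "down": 3}
        for name, allowed in preconditions.items():
            if allowed:                        # e.g. "move blank left: not at extreme left"
                j = i + offsets[name]
                b = list(board)
                b[i], b[j] = b[j], b[i]        # swap the blank with the neighboring tile
                yield name, tuple(b)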

Hill Climbing
Use a heuristic function as a measure of how far we are from the goal, e.g. the number of tiles out of place.
At each step, choose the rule that gives the best improvement in the function (see the sketch below).
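A self-contained hill-climbing sketch for the 8-puzzle in Python, using the number of tiles out of place as the heuristic; all names are our own, and the move generator repeats the one sketched above:

    def misplaced(board, goal):
        """Number of tiles out of place (the blank, 0, is not counted)."""
        return sum(1 for t, g in zip(board, goal) if t != 0 and t != g)

    def neighbours(board):
        """Boards reachable by one blank move (length-9 tuples, 0 = blank)."""
        i = board.index(0)
        row, col = divmod(i, 3)
        for d, ok in ((-1, col > 0), (1, col < 2), (-3, row > 0), (3, row < 2)):
            if ok:
                b = list(board)
                b[i], b[i + d] = b[i + d], b[i]
                yield tuple(b)

    def hill_climb(start, goal, max_steps=100):
        """Greedily take the best successor; stop at the goal or a local optimum."""
        current = start
        for _ in range(max_steps):
            if current == goal:
                return current
            best = min(neighbours(current), key=lambda s: misplaced(s, goal))
            if misplaced(best, goal) >= misplaced(current, goal):
                return current                 # stuck on a local maximum or plateau
            current = best
        return current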

Example
(Diagram: an 8-puzzle board and the boards produced by candidate blank moves U, L, D, R, each annotated with its heuristic value: -4, -3, -3, -2, -1.)

Hill Climbing (cont.)
Problems: local maxima (a state better than all of its neighbors but short of the goal), plateaus, and ridges.
(Diagram: example boards illustrating a local maximum between an initial state and the goal.)

Control
(Diagram: from the current database (state), control must decide which applicable rule, e.g. R2, R3, or R5, to apply to produce the next database.)

Graphs and Digraphs
- Graph: nodes and edges.
- Digraph: nodes and directed arcs.
- Tree: each node has exactly one parent; the root has no parent, a leaf has no successors.
- Path: a sequence of nodes n1, ..., nk.

Graphs and Digraphs (cont.)
Implicit vs. explicit graphs: search generally makes explicit a subgraph of the implicit graph defined by the rules.
(Diagram: an explicit graph built as part of a larger implicit graph.)

Graph Search: Data Structures
Two lists of "discovered" nodes: Open (not yet "expanded") and Closed (already "expanded").
The graph itself, with parent pointers on it (like "Hansel and Gretel" leaving a trail).
The explicit graph grows as nodes are expanded out of the implicit graph.

Graph Search
G <- {s}; Open <- {s}; Closed <- NULL
LOOP:
    Exit with FAILURE if Open = NULL.
    Take the first node n from Open and add it to Closed.
    If n is a goal, exit with SUCCESS (the solution is obtained by following the pointer path from n back to s).
    Expand n: generate M, the set of successors of n, and add them to G as successors of n.

Graph Search (cont.)
    For each m in M:
        If m is new, add it to Open with a pointer back to n.
        If m is already on Open, decide whether to redirect its pointer to n.
        If m is already on Closed, decide whether to redirect its pointer to n, and if so also check all of its descendants.
    Reorder Open.
    Go to LOOP.
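A sketch of this Open/Closed scheme in Python (all names ours). The order function reorders Open and thereby determines which concrete search results, as the "Ordering of Nodes" slide below explains; a node whose cost improves after it was closed is simply reopened rather than having its descendants repaired:

    def graph_search(start, goal_test, successors, order):
        """Generic graph search with Open and Closed lists and parent pointers.
        successors(n) yields (m, cost); order(open_list, g) reorders Open in place."""
        parent = {start: None}
        g = {start: 0}                          # best known cost from start
        open_list = [start]
        closed = set()
        while open_list:
            n = open_list.pop(0)                # take the first node from Open
            closed.add(n)
            if goal_test(n):
                path = []                       # recover the solution via pointers
                while n is not None:
                    path.append(n)
                    n = parent[n]
                return list(reversed(path))
            for m, cost in successors(n):       # expand n
                if m not in g or g[n] + cost < g[m]:
                    g[m] = g[n] + cost          # new node, or a better path found
                    parent[m] = n               # (re)direct the pointer to n
                    closed.discard(m)           # reopen if it had been closed
                    if m not in open_list:
                        open_list.append(m)
            order(open_list, g)                 # reorder Open
        return None                             # Open exhausted: FAILURE

With order as a no-op this behaves like breadth-first search; sorting Open by g gives uniform-cost search, and sorting by g plus a heuristic gives A*.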

Example
(Diagram legend: one symbol marks a node whose successors have not yet been checked, another marks a node whose successors have been checked.)

Example (cont.)
(Diagram: the changes in the graph after the next expansion.)

Disadvantage of Graphs
We have to verify that a newly generated node has not appeared before, which is expensive. If we do not verify, we pay instead in redundant successor computations.

Ordering of Nodes (in Graph Search)
- Descending order (expand deepest first): depth-first search (with cut-off).
- Ascending order: breadth-first search.
- Heuristic ordering (best-first): order by f*(n) = g*(n) + h*(n), which gives the A* algorithm.

Tree Search
Form a queue consisting of the root node.
Until the queue is empty or the goal is achieved:
- See if the first element is the goal; if it is, do nothing (we are done).
- If not, remove it from the queue and add its children:
  - at the back of the queue (breadth-first);
  - at the front of the queue (depth-first);
  - then re-sort the whole queue by estimate (best-first);
  - at the front of the queue, sorted by estimate (hill climbing).
A sketch of this queue discipline follows.
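A minimal sketch of that single loop in Python (our own naming), parameterized by how the children are merged back into the queue:

    def tree_search(root, goal_test, children, merge):
        """Generic tree search; merge(queue, kids) fixes the search strategy."""
        queue = [[root]]                               # a queue of paths
        while queue:
            path = queue.pop(0)
            if goal_test(path[-1]):
                return path
            kids = [path + [c] for c in children(path[-1])]
            queue = merge(queue, kids)
        return None

    # The strategies from the slide, as merge functions:
    breadth_first = lambda queue, kids: queue + kids   # children at the back
    depth_first = lambda queue, kids: kids + queue     # children at the front

    def best_first(estimate):                          # re-sort the whole queue
        return lambda queue, kids: sorted(queue + kids,
                                          key=lambda p: estimate(p[-1]))

    def hill_climbing(estimate):                       # sort children, put them in front
        return lambda queue, kids: sorted(kids,
                                          key=lambda p: estimate(p[-1])) + queue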

A* Algorithm
Two lists: Open and Closed.
A parent list (parent pointers).
A heuristic function f*(n) = g*(n) + h*(n), the estimated cost of a solution constrained to pass through n.

A* Algorithm (cont.)
Open = {s}; g*(s) = 0; f*(s) = h*(s); Closed = NULL.
LOOP:
    If Open = NULL, return Failure.
    Bestnode <- Best(Open)
    Open <- Open - {Bestnode}; add Bestnode to Closed.
    If Goal(Bestnode), return the solution.
    Successors <- Successor(Bestnode)

A* Algorithm (cont.)
For each S ∈ Successors do:
    Parent(S) <- Bestnode
    g*(S) = g*(Bestnode) + c(Bestnode, S)
    Does S ∈ Open? (i.e., it is identified with an existing node OLD)
        Add OLD to Children(Bestnode)
        If g*(S) < g*(OLD):
            g*(OLD) <- g*(S)
            Parent(OLD) <- Bestnode
            f*(OLD) <- g*(OLD) + h*(OLD)

A* Algorithm (cont.)
    Does S ∈ Closed?
        No: add S to Children(Bestnode) and put S on Open with f*(S) = g*(S) + h*(S).
        Yes (identified with OLD):
            Add OLD to Children(Bestnode)
            If g*(S) < g*(OLD):
                g*(OLD) <- g*(S)
                Parent(OLD) <- Bestnode
                f*(OLD) <- g*(OLD) + h*(OLD)

A* Algorithm (cont.)
Propagate the change downwards:
    Do a depth-first traversal from OLD.
    Terminate along a branch if g*(node) <= the cost of the path through OLD, or the node has no successors, or the node is on Open.
    Otherwise update g*(node) and f*(node) and change Parent(node).
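Putting these slides together, here is a compact A* sketch in Python (all names ours). It simplifies the downward-propagation step: when a cheaper path to an already-closed node is found, the node is simply reopened, which yields the same answers for admissible heuristics:

    import heapq
    from itertools import count

    def a_star(start, goal_test, successors, h):
        """A* search. successors(n) yields (m, cost); h(n) estimates cost to a goal.
        Returns (path, cost), or None if no solution exists."""
        g = {start: 0}
        parent = {start: None}
        tie = count()                               # tiebreaker: nodes need not be comparable
        open_heap = [(h(start), next(tie), start)]  # entries are (f, tiebreak, node)
        closed = set()
        while open_heap:
            _, _, n = heapq.heappop(open_heap)
            if n in closed:
                continue                            # stale entry for an already-expanded node
            closed.add(n)
            if goal_test(n):
                path, node = [], n
                while node is not None:             # rebuild the solution via parent pointers
                    path.append(node)
                    node = parent[node]
                return list(reversed(path)), g[n]
            for m, cost in successors(n):
                new_g = g[n] + cost
                if m not in g or new_g < g[m]:
                    g[m] = new_g
                    parent[m] = n
                    closed.discard(m)               # reopen instead of repairing descendants
                    heapq.heappush(open_heap, (new_g + h(m), next(tie), m))
        return None

For the 8-puzzle, successors can wrap the blank-move generator sketched earlier with a unit cost per move, and h can be either heuristic from the slide after next.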

Optimality of A*
Heuristics: f(x) = g(x) + h(x).
Here g(x) is an estimate of g*(x), the actual cost to get to x from s, and h(x) is an estimate of h*(x), the minimal cost to get from x to any goal.
Choose g(x) to be the cost of the best path found so far in the EXPLICIT graph.
Choose h(x) such that h(x) <= h*(x) (an admissible heuristic).

Under these conditions A* returns an optimal path. Example heuristics:
- h(x) = 0.
- For the 8-puzzle: h(x) = number of tiles in the wrong position.
- For the 8-puzzle: h(x) = sum of the distances of each tile from its home square (Manhattan distance).
- Sequence scores such as P(x) + 3 S(x).
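Minimal sketches of the two 8-puzzle heuristics in Python (boards are length-9 tuples with 0 for the blank, as in the earlier sketches; the names are ours). Both satisfy h(x) <= h*(x), so A* with either one returns an optimal path:

    def misplaced_tiles(board, goal):
        """h(x) = number of tiles in the wrong position (the blank is not counted)."""
        return sum(1 for t, g in zip(board, goal) if t != 0 and t != g)

    def manhattan(board, goal):
        """h(x) = sum over tiles of the grid distance from each tile to its home square."""
        total = 0
        for i, tile in enumerate(board):
            if tile == 0:
                continue
            j = goal.index(tile)
            total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
        return total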

Proof of Optimality (on the blackboard). Main points:
- If the search did not terminate, eventually all nodes on OPEN would have very large f values, since they would have large g values.
- But there is always a node on OPEN that lies on the optimal path, so its f value is bounded; hence the search terminates.
- Moreover, just prior to termination the algorithm must choose a node with small f value, so it cannot terminate erroneously.

Proof (in more detail)
Assume there is a solution, and hence an optimal one. Let (s = n0, n1, n2, ..., nk = goal) be an optimal path.
Fact 1: Before success there is always an element of the optimal path on OPEN all of whose predecessors on this path are in CLOSED.
Proof: Initially s is on OPEN, then its children; as long as no goal has been found, let n be the first node on the path with this property.

Fact 2: For an element n of the optimal path on OPEN whose predecessors on that path are in CLOSED, we have g(n) = g*(n), and since h(n) <= h*(n),
    f(n) = g(n) + h(n) = g*(n) + h(n) <= g*(n) + h*(n) = f*(n) = f*(s).
So there is always an element on OPEN whose f value is no larger than the true optimal cost f*(s) from the start s.

Fact 3: Each step on a path costs at least some e > 0. So if the shortest path to a node n has d steps, then g*(n) >= d*e. Therefore, if the search did not terminate, eventually all nodes on OPEN would have arbitrarily large g values, and hence arbitrarily large f values. Thus A* eventually expands all nodes at each depth (like breadth-first search).

Fact 2 + Fact 3 together mean that A* finds a solution: A* terminates either because OPEN is empty (FAILURE) or because it finds a goal, and by Facts 2 and 3 it must eventually find a goal.
The solution found is optimal. Suppose a non-optimal goal were returned: at the last choice, the node n taken from OPEN would satisfy f(n) = g(n) > f*(s). But by Facts 1 and 2 there was a node on the optimal path on OPEN with f value at most f*(s), a better choice than n, so n would not have been chosen. QED

AND/OR Graphs
These are hypergraphs: a hyperarc (connector) connects a node with a SET of nodes. Connectors may be 1-ary, 2-ary, and so on.

Solution Subgraph
A solution subgraph G' of G from a node n to a set N of terminal nodes:
- If n is in N, G' is just n.
- If n has an outgoing connector to a set of nodes a, b, c, ... such that there is a solution graph from each of a, b, c, ... separately to N, then G' consists of n, the connector, a, b, c, ..., and each of those solution graphs.
- Otherwise no solution graph exists.

Costs
Include the cost of each connector. The cost from node n to N, k(n, N), is then defined recursively by:
- If n is in N, then k(n, N) = 0.
- If n has a connector with cost c to nodes a, b, ... in the solution graph, then k(n, N) = c + k(a, N) + k(b, N) + ...
A small sketch of this recursion follows.
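A minimal sketch of this recursion in Python, assuming the chosen solution graph is given as a mapping from each non-terminal node to its connector, written as (cost, list of successor nodes); the graph in the example is hypothetical and all names are ours:

    def k(n, terminals, connector):
        """Cost k(n, N) of the solution graph rooted at n.

        terminals -- the set N of terminal nodes
        connector -- dict: non-terminal node -> (connector cost, successor nodes)"""
        if n in terminals:
            return 0                                   # n in N: cost 0
        cost, successors = connector[n]
        return cost + sum(k(m, terminals, connector) for m in successors)

    # Hypothetical example: n0 has a 2-connector of cost 2 to {a, b};
    # a and b each reach the terminal t through connectors of cost 1 and 3.
    connector = {"n0": (2, ["a", "b"]), "a": (1, ["t"]), "b": (3, ["t"])}
    print(k("n0", {"t"}, connector))                   # 2 + (1 + 0) + (3 + 0) = 6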

Graph Search Algorithm
Create a search graph G consisting solely of the start node s. Put s on the list OPEN.
Create a list called CLOSED, initially empty.
LOOP: if OPEN is empty, exit with FAILURE.
Move the first node of OPEN, call it n, to CLOSED.
If n is a GOAL, exit with the solution obtained via the pointers from n back to s (the pointers are placed below).
Let M be the set of successors of n; place them in G.
Make a pointer to n from each member of M not already in G, and add these members to OPEN.
For members of M already on OPEN, decide whether to change their pointer to n.
For members of M already on CLOSED, decide whether to change their pointer to n, and decide for each of their descendants in G whether to change pointers.
Reorder OPEN.
GO LOOP.