
8/29

Administrative
–Bouncing mails: Qle01; jmussem; rbalakr2. Send me a working email address for the class list.
–Blog posting issues
–Recitation session for Homework 1?
  –Mail sent by Will Cushing (respond to him)
  –Show of hands…

--Decision Theoretic Planning
--Sequential Decision Problems

..certain inalienable rights: life, liberty and the pursuit of
  ? Money
  ? Daytime TV
  ? Happiness (utility)

Discounting

The decision-theoretic agent often needs to assess the utility of sequences of states (also called behaviors).
–One technical problem: how do we keep the utility of an infinite sequence finite?
–A closely related real problem: how do we combine the utility of a future state with that of the current state (how does $15 tomorrow compare with $5000 when you retire)?
–Both are handled by choosing a discount factor r (0 < r < 1) and multiplying the utility of the n-th state by r^n:
  r^0 U(s_0) + r^1 U(s_1) + … + r^n U(s_n) + …
  This is guaranteed to converge, since power series converge for 0 < r < 1.
–r is set by the individual agent based on how it thinks future rewards stack up against current ones. An agent that expects to live longer may use a larger r than one that expects to live a shorter time…
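The discounted sum above is straightforward to compute. A minimal sketch (the reward values and variable names here are illustrative, not from the lecture):

```python
def discounted_utility(utilities, r):
    """Sum of r^n * U(s_n) over a (finite prefix of a) state sequence."""
    return sum((r ** n) * u for n, u in enumerate(utilities))

# A constant reward of 5 per state, discounted at r = 0.9:
prefix = discounted_utility([5, 5, 5, 5], 0.9)  # 5 + 4.5 + 4.05 + 3.645 ~ 17.2

# With per-state utility bounded by U_max, the infinite sum is bounded by
# U_max / (1 - r) (geometric series), so an infinite behavior's utility
# stays finite:
bound = 5 / (1 - 0.9)  # = 50.0
```

The geometric-series bound is exactly why 0 < r < 1 is required: at r = 1 the sum of constant rewards diverges.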

How the course topics stack up…

Representation Mechanisms:
  Logic (propositional; first-order)
  Probabilistic logic
Learning the models
Search:
  Blind, informed
Planning
Inference:
  Logical resolution
  Bayesian inference

Learning

Dimensions:
–What can be learned?
  Any of the boxes representing the agent’s knowledge: action descriptions, effect probabilities, causal relations in the world (and the probabilities of causation), utility models (sort of, through credit assignment), sensor-data interpretation models
–What feedback is available?
  Supervised, unsupervised, “reinforcement” learning; the credit assignment problem
–What prior knowledge is available?
  “Tabula rasa” (the agent’s head is a blank slate) or pre-existing knowledge

Problem Solving Agents (Search-based Agents)

The important difference from the graph-search scenario you learned in CSE 310 is that you want to keep the graph implicit rather than explicit, i.e., generate only the part of the graph that is absolutely needed to get the optimal path. This is VERY important, since for most problems the graphs are humongous..
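A sketch of what "implicit" means in code: the graph is never stored; a successor function generates neighbors on demand, and search materializes only the part it visits. The toy problem here (states are integers, actions are +1 and x2) is invented for illustration:

```python
from collections import deque

def successors(n):
    # The graph is never materialized: neighbors are generated on demand.
    return [n + 1, 2 * n]

def bfs_implicit(start, goal):
    """Breadth-first search over an implicitly defined graph."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in successors(path[-1]):
            # Pruning past the goal is safe here only because both
            # actions strictly increase n.
            if nxt not in seen and nxt <= goal:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

print(bfs_implicit(1, 10))  # [1, 2, 4, 5, 10]
```

The full state space of integers is infinite, yet the search touches only a handful of nodes: this is the point of keeping the graph implicit.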

What happens when the domain is inaccessible?

Given a state space of size n:
–the single-state problem searches for a path in a graph of size n
–the multiple-state problem searches for a path in a graph of size 2^n
–the contingency problem searches for a sub-graph in a graph of size 2^n

The utility of eyes (sensors) is reflected in the size of the effective search space!

In general a subgraph rather than a tree, since loops may be needed (consider closing a faulty door).

2^n is the EVIL that every CS student’s nightmares are made of.

Search in the multi-state (inaccessible) version

The set of states is called a “belief state,” so we are searching in the space of belief states. You search in this space even if your initial state is known but actions are non-deterministic. Sensing reduces state uncertainty.
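One way to make this concrete: represent a belief state as a set of possible states; a nondeterministic action maps the belief to the union of all possible outcomes, and sensing filters the belief down. The toy world below (four positions on a line, a slippery "right" action, a wall sensor) is an assumed example, not the lecture's:

```python
def predict(belief, action):
    """Nondeterministic action: the new belief is the union of all
    possible outcomes from every state we might be in."""
    return frozenset(s2 for s in belief for s2 in action(s))

def update(belief, percept, sense):
    """Sensing shrinks the belief state: keep only states consistent
    with the observed percept."""
    return frozenset(s for s in belief if sense(s) == percept)

# Toy world: positions 0..3; 'right' may move one step or slip in place.
right = lambda s: {s, min(s + 1, 3)}
at_end = lambda s: s == 3          # sensor: "am I at the right wall?"

b0 = frozenset({0, 1, 2, 3})       # total uncertainty over n=4 states
b1 = predict(b0, right)            # still {0, 1, 2, 3}: action adds uncertainty
b2 = update(b1, True, at_end)      # percept True collapses belief to {3}
print(sorted(b2))  # [3]
```

Since a belief state is any subset of the n states, the belief space has size 2^n, which is where the exponential blowup on the previous slide comes from.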

General Search

All search algorithms must do goal-test only when the node is picked up for expansion

Search algorithms differ based on the specific queuing function they use
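The two points above can be sketched in one generic procedure: the goal test happens only when a node is picked for expansion, and the "strategy" is nothing but the queuing discipline. A minimal, assumed sketch (not the lecture's code):

```python
from collections import deque

def general_search(start, goal_test, successors, strategy="bfs"):
    """Generic tree search; the strategy is just the queuing function."""
    frontier = deque([[start]])
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if goal_test(node):          # goal-test only when picked for expansion
            return path
        children = [path + [c] for c in successors(node)]
        if strategy == "bfs":
            frontier.extend(children)                 # FIFO: enqueue at back
        else:                                         # "dfs"
            frontier.extendleft(reversed(children))   # LIFO: push at front
    return None

# Tiny illustrative graph:
graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
print(general_search("A", lambda n: n == "D", lambda n: graph[n], "bfs"))
```

Swapping the one line that inserts children turns BFS into DFS; priority-queue insertion (by g, or by g + h) gives the cost-based algorithms later in the lecture.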

Breadth-first search on a uniform tree with b = 10. Assume 1000 nodes expanded/sec and 100 bytes/node.
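Under those assumptions the time and memory costs can be computed directly; a node count of 1 + b + b^2 + … + b^d grows as b^d, so both explode with depth:

```python
def bfs_cost(d, b=10, nodes_per_sec=1000, bytes_per_node=100):
    """Nodes generated by BFS to goal depth d, and resulting time/memory."""
    nodes = sum(b ** i for i in range(d + 1))   # 1 + b + b^2 + ... + b^d
    return nodes, nodes / nodes_per_sec, nodes * bytes_per_node

for d in (2, 6, 10):
    n, secs, mem = bfs_cost(d)
    print(f"d={d:2d}: {n:,} nodes, {secs / 3600:.2g} hours, {mem / 1e9:.2g} GB")
```

At d = 6 this is already about 1.1 million nodes (roughly 19 minutes, 0.1 GB); at d = 10 it is about 1.1e10 nodes, on the order of months of time and a terabyte of memory.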

Qn: Is there a way of getting linear memory search that is complete and optimal?

The search is “complete” now (since there is only a finite space to be explored), but still suboptimal.

8/31

All search algorithms must do goal-test only when the node is picked up for expansion

Search algorithms differ based on the specific queuing function they use. We typically analyze properties of search algorithms on uniform trees, with uniform branching factor b and goal depth d (the tree itself may go to depth d_t).

IDDFS: Review

(Graph with nodes A, B, C, D, G)

DFS: A, B, G
BFS: A, B, C, D, G
IDDFS: (A), (A, B, G)

Note that IDDFS can do fewer expansions than DFS on a graph-shaped search space.

(Same graph with cycles: search on undirected graphs or directed graphs with cycles)

DFS: A, B, A, B, A, B, A, B, A, B, …
BFS: A, B, C, D, G
IDDFS: (A), (A, B, G)

Cycles galore… Note that IDDFS can do fewer expansions than DFS on a graph-shaped search space.
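A sketch of iterative deepening, with a tiny cyclic graph assumed for illustration (A and B point at each other, and B also reaches G), reproducing the behavior above: plain DFS loops A, B, A, B, …, while the depth bound cuts the loop off:

```python
def dls(node, goal, successors, limit):
    """Depth-limited DFS; returns a path to goal or None."""
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for child in successors(node):
        sub = dls(child, goal, successors, limit - 1)
        if sub is not None:
            return [node] + sub
    return None

def iddfs(start, goal, successors, max_depth=50):
    """Iterative deepening: re-run depth-limited DFS with growing limits.
    The repeated shallow work is cheap because the deepest level dominates
    the node count; the depth bound also prevents infinite descent into
    cycles like A <-> B."""
    for limit in range(max_depth + 1):
        path = dls(start, goal, successors, limit)
        if path is not None:
            return path
    return None

graph = {"A": ["B"], "B": ["A", "G"], "G": []}
print(iddfs("A", "G", lambda n: graph[n]))  # ['A', 'B', 'G']
```

IDDFS keeps DFS's linear memory while regaining BFS's completeness (and optimality in step costs of 1), which answers the earlier question about linear-memory complete search.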

Graph (instead of tree) search: handling repeated nodes

Main points:
–Repeated expansions are a bigger issue for DFS than for BFS or IDDFS
–Trying to remember all previously expanded nodes and comparing the new nodes with them is infeasible:
  –space becomes exponential
  –duplicate checking can also be exponential
–Partial reduction in repeated expansion can be done by:
  –checking whether any children of a node n have the same state as the parent of n
  –checking whether any children of a node n have the same state as any ancestor of n (at most d ancestors for n, where d is the depth of n)

Uniform Cost Search (Bait & Switch graph with nodes A, B, C, D, G)

Expansion order: N0:A(0); N1:B(1), N2:G(9); N3:C(2); N4:D(3); N5:G(5)

Completeness? Optimality?
–If d < d’, then paths with distance d are explored before those with d’
–Branch & bound argument (as long as all operator costs are +ve)
Efficiency? (as bad as blind search..)

Notation:
–c(n, n’): cost of the edge between n and n’
–g(n): distance of n from the root
–dist(n, n’’): shortest distance between n and n’’
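A sketch of uniform cost search with a priority queue, on a reconstruction of the Bait & Switch graph (edge costs chosen to reproduce the g-values in the expansion order above; the exact edges are an assumption):

```python
import heapq

def uniform_cost_search(start, goal, successors):
    """Expand the frontier node with the smallest g(n). With positive edge
    costs, the first goal node picked for expansion is optimal."""
    frontier = [(0, start, [start])]            # (g, state, path)
    best_g = {}
    while frontier:
        g, node, path = heapq.heappop(frontier)
        if node == goal:                         # goal-test at expansion time
            return g, path
        if node in best_g and best_g[node] <= g:
            continue                             # already expanded more cheaply
        best_g[node] = g
        for child, cost in successors(node):
            heapq.heappush(frontier, (g + cost, child, path + [child]))
    return None

graph = {"A": [("B", 1), ("G", 9)], "B": [("C", 1)],
         "C": [("D", 1)], "D": [("G", 2)], "G": []}
print(uniform_cost_search("A", "G", lambda n: graph[n]))
```

The cheap direct edge A to G (the "bait", cost 9) is pushed early but only popped after the longer-looking path of total cost 5, so the optimal path A, B, C, D, G wins.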

Visualizing Breadth-First & Uniform Cost Search

Breadth-first goes level by level. This is also a proof of optimality…

Proof of optimality of uniform cost search

Let N be the goal node we output. Suppose there is another goal node N’. We want to prove that g(N’) >= g(N).

Suppose this is not true, i.e., g(N’) < g(N) (Assumption A1).

When N was picked up for expansion, either N’ itself or some ancestor of N’, say N’’, must have been on the search queue. If we picked N instead of N’’ for expansion, it was because g(N) <= g(N’’) (Fact f1).

But g(N’) = g(N’’) + dist(N’’, N’), so g(N’) >= g(N’’). This step holds only because dist(N’’, N’) >= 0, which is guaranteed if every operator has +ve cost.

So from f1, g(N) <= g(N’’) <= g(N’). But this contradicts Assumption A1.
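The chain of inequalities in the proof can be written compactly:

```latex
\[
  g(N) \le g(N'') \quad \text{(f1: $N$ was chosen for expansion before $N''$)}
\]
\[
  g(N') = g(N'') + \mathrm{dist}(N'', N') \ge g(N'')
  \quad \text{(since edge costs are positive, $\mathrm{dist}(N'', N') \ge 0$)}
\]
\[
  \Rightarrow\ g(N) \le g(N'') \le g(N'),
  \quad \text{contradicting A1: } g(N') < g(N).
\]
```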

“Informing” Uniform search… (Bait & Switch graph)

Expansion order: N0:A(0); N1:B(.1), N2:G(9); N3:C(.2); N4:D(.3); N5:G(25.3)

It would be nice if we could tell that N2 is better than N1:
–We need to take into account not just the distance so far, but also the distance to the goal
–Computing the true distance to the goal is as hard as the full search
–So, try “bounds” h(n) and prioritize nodes in terms of f(n) = g(n) + h(n)

Two bounds: h1(n) <= h*(n) <= h2(n). Which guarantees optimality? (Admissibility)
If h1(n) <= h2(n) <= h*(n), which is the better function? (Informedness)
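Prioritizing by f(n) = g(n) + h(n) is A* search; a sketch follows. The graph reuses the earlier Bait & Switch costs, and the h values are illustrative admissible estimates (here the true remaining distances), both assumptions for the example:

```python
import heapq

def astar(start, goal, successors, h):
    """Best-first search on f(n) = g(n) + h(n). If h is admissible
    (h(n) <= h*(n), the true remaining cost), the first goal node popped
    is optimal, by the same branch-and-bound argument as uniform cost."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, state, path)
    expanded = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if node in expanded and expanded[node] <= g:
            continue
        expanded[node] = g
        for child, cost in successors(node):
            heapq.heappush(frontier,
                           (g + cost + h(child), g + cost, child, path + [child]))
    return None

graph = {"A": [("B", 1), ("G", 9)], "B": [("C", 1)],
         "C": [("D", 1)], "D": [("G", 2)], "G": []}
h = {"A": 5, "B": 4, "C": 3, "D": 2, "G": 0}     # admissible: h(n) <= h*(n)
print(astar("A", "G", lambda n: graph[n], lambda n: h[n]))
```

With h(n) = 0 everywhere (the least informed admissible bound) this degenerates to uniform cost search; the more informed bound above lets A* rank the bait edge (f = 9) below the real path (f = 5) immediately.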
