 Lisp recitation after class in the same room

Time for Buyer's Remorse? Final class tally:
–Total 46 (room capacity)
–CSE 471: 28 [72%; 1 soph; 3 junior; 24 senior]
–CSE 598: 18 [28%; 1 PhD, 13 MS, 4 MCS]
"This is one of the most exciting courses in the department! Unlike other courses that form the basis of a field of study, this course is (sort of) at the top of the food chain, so the concepts that we learnt in various other fields are applied here to solve practical problems and create systems that are truly useful."
–Unedited comment of a student from the Spring 2009 class

(Model-based reflex agents) How do we write agent programs for these?

This one already assumes that the "sensors → features" mapping has been done! Even basic survival needs state information..

EXPLICIT MODELS OF THE ENVIRONMENT (aka Model-based Reflex Agents)
–Blackbox models
–Factored models
 –Logical models
 –Probabilistic models
State Estimation

It is not always obvious what action to do now given a set of goals. You woke up in the morning. You want to attend a class. What should your action be?
–Search (find a path from the current state to the goal state; execute the first op)
–Planning (does the same for structured, non-blackbox state models)
State Estimation → Search/Planning

How the course topics stack up…
–Representation mechanisms: logic (propositional; first-order); probabilistic logic
–Learning the models
–Search: blind, informed
–Planning
–Inference: logical resolution; Bayesian inference

Agent Classification in Terms of State Representations

Type: Atomic
 State representation: States are indivisible; no internal structure.
 Focus: Search on atomic states.

Type: Propositional (aka Factored)
 State representation: States are made of state variables that take values (propositional, multi-valued, or continuous).
 Focus: Search + inference in logical (prop. logic) and probabilistic (Bayes nets) representations.

Type: Relational
 State representation: States describe the objects in the world and their inter-relations.
 Focus: Search + inference in predicate logic (or relational probabilistic models).

Type: First-order
 State representation: Relational, plus functions over objects.
 Focus: Search + inference in first-order logic (or first-order probabilistic models).

Illustration with Vacuum World

Atomic: S1, S2, …, S8. Each state is seen as an indivisible snapshot; all actions are S×S matrices. If you add a second Roomba the state space doubles; if you want to consider noisiness of the rooms, the representation quadruples.

Propositional/Factored: States made up of 3 state variables: Dirt-in-left-room (T/F), Dirt-in-right-room (T/F), Roomba-in-room (L/R). Each state is an assignment of values to state variables: 2^3 different states. Actions can just mention the variables they affect. Note that the representation is compact (logarithmic in the size of the state space). If you add a second Roomba, the representation increases by just one more state variable; if you want to consider "noisiness" of rooms, we need two variables, one for each room.

Relational: World made of objects: Roomba, L-room, R-room, dirt. Relations: In(·, ·); Dirty(·). If you add a second Roomba, or more rooms, only the objects increase. If you want to consider noisiness, you just need to add one other relation.
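The factored representation above can be sketched in code. This is my own illustration (variable names like dirt_left are not from the slides); it shows why the description is logarithmic in the size of the state space and why actions need only mention the variables they affect:

```python
from itertools import product

# Three state variables describe all 2^3 = 8 atomic states.
states = [
    {"dirt_left": dl, "dirt_right": dr, "roomba_in": loc}
    for dl, dr, loc in product([True, False], [True, False], ["L", "R"])
]

def suck(state):
    """The Suck action only mentions the variables it affects:
    it clears the dirt variable of whichever room the Roomba is in."""
    s = dict(state)
    if s["roomba_in"] == "L":
        s["dirt_left"] = False
    else:
        s["dirt_right"] = False
    return s
```

Adding a second Roomba to this sketch would add just one more variable to each state dictionary, while the atomic state space would double.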

Problem Solving Agents (Search-based Agents)

Atomic or

Simple goal: Both rooms should be clean.

What happens when the domain is inaccessible?

Search in Multi-state (inaccessible) version

The set of states is called a "belief state", so we are searching in the space of belief states. Notice that actions can sometimes reduce state uncertainty; sensing reduces state uncertainty. The space of belief states is exponentially larger than the space of states. If you throw in the likelihood of states in a belief state, the resulting state space is infinite!
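A minimal sketch of belief-state progression; the names (act, sense, transition, consistent) are mine, not the slides'. A belief state is just a set of possible physical states, and both acting and sensing map belief states to belief states:

```python
def act(belief_state, action, transition):
    """Progress a belief state through an action.
    transition(s, a) returns the set of states action a may lead to from s."""
    return frozenset(s2 for s in belief_state for s2 in transition(s, action))

def sense(belief_state, observation, consistent):
    """Sensing shrinks the belief state to states consistent with the percept."""
    return frozenset(s for s in belief_state if consistent(s, observation))
```

Sensing can only shrink a belief state; actions may shrink it too (when many states funnel into the same successor), which is how actions can reduce state uncertainty.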

Will we really need to handle multiple-state problems? Can't we just buy better cameras, so our agents can always tell what state they are in? It is not just a question of having a good pair of eyes.. Otherwise, why do malls have maps of the mall with a "here you are" annotation in the map?
–The problem of localizing yourself in a map is a non-trivial one..

If we can solve problems without sensors, then why have sensing?

State-spaces with non-deterministic actions correspond to hyper-graphs, but can be made graphs in the belief space. Non-deterministic actions are normal edges in belief space (but hyper-edges in the original state space).
[Figure: a hyper-graph over states s1–s5 with actions a1–a3; the solution is a contingency plan ("if in s4 do a2; …"), alongside the corresponding belief-space graph over belief states such as {S1,S3,S4}, {S2}, {S5}]

Medicate without killing..

A healthy (and alive) person accidentally walked into the Springfield nuclear plant and got irradiated, which may or may not have given her a disease D. The medication M will cure her of D if she has it; otherwise, it will kill her. There is a test T which, when done on patients with disease D, turns their tongues red (R). You can observe with Look sensors to see if the tongue is red or not. We want to cure the patient without killing her..
[Figure: belief-state plan — Radiate: (A) becomes {(D,A), (~D,A)}; Test: becomes {(D,A,R), (~D,A,~R)}; sensing "Is tongue red?" partitions the belief state; if yes, Medicate: (D,A,R) becomes (~D,A,R). Medicating without testing risks (~D,~A). Sensing "partitions" the belief state; non-deterministic actions are normal edges in belief space (but hyper-edges in the original state space).]

Unknown State Space
When you buy a Roomba, does it have the "layout" of your home?
–Fat chance! For $200, they aren't going to customize it to everyone's place!
When the map is not given, the robot needs to both "learn" the map and "achieve the goal"
–Integrates search/planning and learning
Exploration/Exploitation tradeoff
–Should you bother learning more of the map when you already found a way of satisfying the goal?
–(At the end of elementary school, should you go ahead and "exploit" the 5 years of knowledge you gained by taking up a job; or explore a bit more by doing high school, college, grad school, post-doc…?)
Most relevant sub-area: Reinforcement learning

1/17  Project 0 Due on Thursday…  Makeup class on Friday—TIME?…  Tuesday’s class time will be optional recitation for project

Given a state space of size n (or 2^v, where v is the number of state variables):
–the single-state problem searches for a path in the graph of size n (2^v)
–the multiple-state problem searches for a path in a graph of size 2^n (2^(2^v))
–the contingency problem searches for a sub-graph in a graph of size 2^n (2^(2^v))
The utility of eyes (sensors) is reflected in the size of the effective search space!
In general, a subgraph rather than a tree (loops may be needed; consider closing a faulty door).
2^n is the EVIL that every CS student's nightmares should be made of!

The important difference from the graph-search scenario you learned in CSE 310 is that you want to keep the graph implicit rather than explicit (i.e., generate only the part of the graph that is absolutely needed to get the optimal path). This is VERY important, since for most problems the graphs are ginormous, tending to infinite..
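One way to keep the graph implicit, sketched here with made-up names: the successor function generates children on demand, and nothing is materialized beyond what the search actually touches.

```python
from collections import deque

def bfs_implicit(start, successors, is_goal):
    """Breadth-first search over an implicitly defined graph."""
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if is_goal(state):
            return path
        for s2 in successors(state):      # graph generated lazily, as needed
            if s2 not in seen:
                seen.add(s2)
                frontier.append((s2, path + [s2]))
    return None
```

This even works on infinite graphs, e.g. `successors = lambda n: [n + 1, 2 * n]` over the integers, as long as a goal is reachable.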

Example: Robotic Path-Planning
–States: free-space regions
–Operators: movement to neighboring regions
–Goal test: reaching the goal region
–Path cost: number of movements (distance traveled)
[Figure: a workspace with initial position I, goal G, and heuristic h_D]
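A toy cell-based version of this formulation (the grid, the names, and the goal cell are mine, chosen purely for illustration): free cells are the states, 4-neighbor moves are the operators, and path cost counts movements.

```python
GRID = ["....",
        ".#..",
        "...."]          # '#' = obstacle, '.' = free space

def successors(cell):
    """Movement operators: the free cells adjacent to `cell`."""
    r, c = cell
    for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        r2, c2 = r + dr, c + dc
        if 0 <= r2 < len(GRID) and 0 <= c2 < len(GRID[0]) and GRID[r2][c2] == ".":
            yield (r2, c2)

def is_goal(cell, goal=(2, 3)):
    return cell == goal

def path_cost(path):
    return len(path) - 1   # number of movements
```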

1/29
January 29, 2009: "Mars Rover Disoriented Somewhat After Glitch," by Kenneth Chang.
On the 1,800th Martian day of its mission, NASA's Spirit rover blanked out, and it remains a bit disoriented. Mission managers at NASA's Jet Propulsion Laboratory in Pasadena, Calif., reported Wednesday that the Spirit had behaved oddly on Sunday, the 1,800th Sol, or Martian day, since Spirit's landing on Mars in January 2004. (A Martian Sol is 39.5 minutes longer than an Earth day. The Spirit and its twin, the Opportunity, were designed to last just 90 Sols each, but both continue to operate more than five years later.)
On that day, the Spirit acknowledged receiving its driving directions from Earth, but it did not move. More strangely, the Spirit had no memory of what it had done for that part of the Sol. The rover did not record actions, as it otherwise always does, to the part of its computer memory that retains information even when power is turned off, the so-called nonvolatile memory. "It's almost as if the rover had a bout of amnesia," said John Callas, the project manager for the rovers. Another rover system did record that power was being drawn from the batteries for an hour and a half. "Meaning the rover is awake doing something," Dr. Callas said. But before-and-after images showed that the Spirit ended the day exactly where it started.
On Monday, mission controllers told the Spirit to orient itself by locating the Sun in the sky with its camera, and it reported that it had been unable to do so. Dr. Callas said the camera did actually photograph the Sun, but it was not quite in the position the rover expected. One hypothesis is that a cosmic ray hit the electronics and scrambled the rover's memory. On Tuesday, the rover's nonvolatile memory worked properly. The Spirit now reports to be in good health and responds to commands from Earth.

?? General Search

Search algorithms differ based on the specific queuing function they use. All search algorithms must do the goal test only when the node is picked up for expansion. We typically analyze properties of search algorithms on uniform trees, with uniform branching factor b and goal depth d (the tree itself may go to depth d_t).
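The point about queuing functions can be sketched as follows (my own code; bfs_insert and dfs_insert are illustrative names). The only difference between the two strategies is where children join the fringe, and the goal test happens only when a node is picked up for expansion:

```python
from collections import deque

def tree_search(start, successors, is_goal, insert):
    """Generic tree search; `insert` decides where children join the fringe."""
    fringe = deque([start])
    expansions = []
    while fringe:
        node = fringe.popleft()
        expansions.append(node)          # node picked up for expansion...
        if is_goal(node):                # ...and only now goal-tested
            return expansions
        insert(fringe, successors(node))
    return expansions

def bfs_insert(fringe, children):        # FIFO: children go to the back
    fringe.extend(children)

def dfs_insert(fringe, children):        # LIFO: children go to the front
    fringe.extendleft(reversed(children))
```

A priority-queue insert ordered by path cost or by a heuristic would turn the same skeleton into uniform-cost or best-first search.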

Evaluating search strategies
[Figure: for the tree shown, b = 3, d = 4]

Breadth-first search on a uniform tree of b = 10. Assume 1000 nodes expanded/sec and 100 bytes/node.
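A back-of-the-envelope calculator under those assumptions (b = 10, 1000 nodes expanded/sec, 100 bytes/node); the function name is mine:

```python
def bfs_cost(depth, b=10, nodes_per_sec=1000, bytes_per_node=100):
    """Nodes in a uniform tree to depth d: 1 + b + b^2 + ... + b^d,
    with the corresponding time (seconds) and memory (bytes)."""
    nodes = sum(b**i for i in range(depth + 1))
    return nodes, nodes / nodes_per_sec, nodes * bytes_per_node

nodes, secs, mem = bfs_cost(4)   # 11,111 nodes: about 11 seconds and ~1.1 MB
```

Each extra level multiplies all three numbers by roughly b, which is why BFS memory, not time, is usually what gives out first.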

Qn: Is there a way of getting linear memory search that is complete and optimal?

The search is "complete" now (since there is finite space to be explored), but still not optimal.

IDDFS: Review

[Figure: graph with nodes A, B, C, D, G]
DFS: A, B, C, D, G
BFS: A, B, G
IDDFS: (A), (A, B, G)
Note that IDDFS can do fewer expansions than DFS on a graph-shaped search space.

Search on undirected graphs, or directed graphs with cycles… Cycles galore…
[Figure: the same graph with nodes A, B, C, D, G, now with cycles]
DFS: A, B, A, B, A, B, A, B, A, B, …
BFS: A, B, G
IDDFS: (A), (A, B, G)
Note that IDDFS can do fewer expansions than DFS on a graph-shaped search space.
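A minimal IDDFS sketch (my own code, not from the slides): depth-limited DFS restarted with increasing limits, so a cycle that traps plain DFS cannot trap it, because no iteration ever goes deeper than its limit.

```python
def depth_limited(node, successors, is_goal, limit, expansions):
    """DFS that refuses to descend past `limit`; records expansion order."""
    expansions.append(node)
    if is_goal(node):
        return True
    if limit == 0:
        return False
    return any(depth_limited(s, successors, is_goal, limit - 1, expansions)
               for s in successors(node))

def iddfs(start, successors, is_goal, max_depth=10):
    for limit in range(max_depth + 1):
        expansions = []
        if depth_limited(start, successors, is_goal, limit, expansions):
            return expansions   # expansions of the final iteration only
    return None
```

On a cyclic graph such as A ↔ B with goal G a child of A, each iteration terminates, and the goal is found in the limit-1 iteration.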

Graph (instead of tree) Search: Handling repeated nodes
Main points:
–Repeated expansions are a bigger issue for DFS than for BFS or IDDFS.
–Trying to remember all previously expanded nodes and comparing the new nodes with them is infeasible:
 –space becomes exponential
 –duplicate checking can also be exponential
–Partial reduction in repeated expansion can be done by:
 –checking to see if any children of a node n have the same state as the parent of n
 –checking to see if any children of a node n have the same state as any ancestor of n (at most d ancestors for n, where d is the depth of n)
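The ancestor check described in the last bullet can be sketched as follows (illustrative code, my names): the DFS carries its current path and prunes any child whose state already appears among its at-most-d ancestors, at the cost of only O(d) extra memory per path.

```python
def dfs_no_ancestor_repeats(node, successors, is_goal, path=()):
    """DFS that refuses to revisit any state on the current path."""
    if node in path:            # node repeats an ancestor: prune this branch
        return None
    path = path + (node,)
    if is_goal(node):
        return list(path)
    for s in successors(node):
        found = dfs_no_ancestor_repeats(s, successors, is_goal, path)
        if found:
            return found
    return None
```

This only blocks repeats along a single path; repeats across different branches can still be expanded, which is why it is a partial reduction rather than full duplicate elimination.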