Thinking Cap 1 Declared a resounding success! 16 Unique students.

Administrative
Final class tally: total 43
– CSE 471: 31 (72%; 5 junior, 26 senior)
– CSE 598: 12 (28%; 3 PhD, 9 MS)
Grader for the class:
– Xin Sun (took CSE 471 in Fall 2007)
– Will work with Yunsong Meng

(Model-based reflex agents) How do we write agent programs for these?

This one already assumes that the "sensors → features" mapping has been done! Even basic survival needs state information.

EXPLICIT MODELS OF THE ENVIRONMENT
– Blackbox models
– Factored models
  – Logical models
  – Probabilistic models
(aka Model-based Reflex Agents)
State Estimation

It is not always obvious what action to do now given a set of goals. You woke up in the morning. You want to attend a class. What should your action be?
→ Search (find a path from the current state to the goal state; execute the first op)
→ Planning (does the same for structured, non-blackbox state models)

How the course topics stack up…
Representation Mechanisms: Logic (propositional; first order), Probabilistic logic
Learning the models
Search: Blind, Informed
Planning
Inference: Logical resolution, Bayesian inference

– Decision Theoretic Planning
– Sequential Decision Problems
..certain inalienable rights: life, liberty and the pursuit of ?Money ?Daytime TV ?Happiness (utility)

Discounting
The decision-theoretic agent often needs to assess the utility of sequences of states (also called behaviors).
– One technical problem is: how do we keep the utility of an infinite sequence finite?
– A closely related real problem is how we combine the utility of a future state with that of the current state (how does $15 tomorrow compare with $5000 when you retire?)
– Both are handled by choosing a discount factor r (0 < r < 1) and multiplying the utility of the n-th state by r^n:
  r^0 U(s_0) + r^1 U(s_1) + … + r^n U(s_n) + …
  This is guaranteed to converge, since the power series converges for 0 < r < 1.
– r is set by the individual agent based on how it thinks future rewards stack up against current ones. An agent that expects to live longer may use a larger r than one that expects to live shorter.
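The discounted sum above can be sketched in a few lines of Python (a minimal illustration; the reward values are made up):

```python
def discounted_return(rewards, r):
    """Sum r^0*U(s0) + r^1*U(s1) + ... for a finite state sequence.

    r is the discount factor, 0 < r < 1, which is what keeps the
    infinite version finite (it is bounded by max|U| / (1 - r))."""
    return sum((r ** n) * u for n, u in enumerate(rewards))

# A reward of 1 one step from now, discounted at r = 0.9:
print(discounted_return([0, 1], 0.9))  # 0.9
# The same reward 30 steps out is worth far less:
print(discounted_return([0] * 30 + [1], 0.9))
```

The same reward counts for less the further in the future it arrives, which is exactly the $15-tomorrow-vs-$5000-at-retirement comparison the slide makes.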

Learning
Dimensions:
What can be learned?
– Any of the boxes representing the agent's knowledge
– Action descriptions, effect probabilities, causal relations in the world (and the probabilities of causation), utility models (sort of, through credit assignment), sensor-data interpretation models
What feedback is available?
– Supervised, unsupervised, "reinforcement" learning
– Credit assignment problem
What prior knowledge is available?
– "Tabula rasa" (the agent's head is a blank slate) or pre-existing knowledge

Problem Solving Agents (Search-based Agents)

The important difference from the graph-search scenario you learned in CSE 310 is that you want to keep the graph implicit rather than explicit (i.e., generate only the part of the graph that is absolutely needed to get the optimal path). This is VERY important, since for most problems the graphs are ginormous, tending to infinite.
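The implicit-graph point can be sketched as follows: the search takes a successor *function* rather than a stored adjacency list, so only the part of the (possibly infinite) graph that the search actually touches is ever generated. The integer "increment-or-double" graph below is a hypothetical example, not from the slides:

```python
from collections import deque

def bfs_implicit(start, successors, is_goal):
    """Breadth-first search over an implicit graph: nodes are produced
    on demand by `successors`, never materialized up front."""
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if is_goal(state):
            return path
        for s in successors(state):
            if s not in seen:
                seen.add(s)
                frontier.append((s, path + [s]))
    return None

# An infinite graph over the integers: each n has successors n+1 and 2n.
# Only the handful of nodes BFS visits ever exist in memory.
path = bfs_implicit(1, lambda n: [n + 1, 2 * n], lambda n: n == 10)
# path == [1, 2, 4, 5, 10]
```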

What happens when the domain is inaccessible?

Search in the Multi-state (inaccessible) version
The set of states is called a "belief state", so we are searching in the space of belief states.
– Notice that actions can sometimes reduce state uncertainty; sensing reduces state uncertainty.
– The space of belief states is exponentially larger than the space of states. If you throw in the likelihood of states in a belief state, the resulting state space is infinite!
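A minimal sketch of belief-state progression (the 4-state world and the "right" action below are hypothetical, just to make the action/sensing distinction concrete):

```python
def apply_action(belief, transition):
    """Progress a belief state (a frozenset of states) through an action:
    the new belief is the union of the action's possible outcomes
    from every state we might currently be in."""
    return frozenset(s2 for s in belief for s2 in transition(s))

def apply_sensing(belief, percept_of):
    """Sensing partitions the belief state: states that would produce
    different percepts can now be told apart."""
    parts = {}
    for s in belief:
        parts.setdefault(percept_of(s), set()).add(s)
    return {p: frozenset(ss) for p, ss in parts.items()}

# Hypothetical 4-state world: the action 'right' moves 1->2 and 3->4,
# and leaves 2 and 4 in place. Acting alone shrinks the belief state.
right = {1: [2], 2: [2], 3: [4], 4: [4]}
b = apply_action(frozenset({1, 2, 3, 4}), lambda s: right[s])   # {2, 4}
# Sensing whether we are in the "top row" {1, 2} finishes the job:
parts = apply_sensing(b, lambda s: s in (1, 2))
```

Note how the action alone reduced four possible states to two, and the sensing step then split the remainder into singletons, matching the slide's point that both actions and sensing can reduce state uncertainty.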

Will we really need to handle multiple-state problems? Can't we just buy better cameras, so our agents can always tell what state they are in? It is not just a question of having a good pair of eyes. Otherwise, why do malls have maps of the mall with a "here you are" annotation?
– The problem of localizing yourself in a map is a non-trivial one.

State spaces with non-deterministic actions correspond to hyper-graphs, but can be made ordinary graphs in the belief space.
[Figure: states s1–s5 with non-deterministic actions a1–a3; the solution is conditional (if in s4 do a2; if in s2 do a3; …), while in belief space the same actions become normal edges between belief states such as {S2}, {S5}, {S1,S3,S4}, and {S1,S3,S4,S2}.]

Medicate without killing..
A healthy (and alive) person accidentally walked into the Springfield nuclear plant and got irradiated, which may or may not have given her a disease D. The medication M will cure her of D if she has it; otherwise, it will kill her. There is a test T which, when done on patients with disease D, turns their tongues red (R). You can observe with Look sensors to see if the tongue is pink or not. We want to cure the patient without killing her.
[Figure: the belief-state plan. Radiate leads to the belief state {(D,A), (~D,A)}; Test yields {(D,A,R), (~D,A,~R)}; the "Is tongue red?" observation partitions the belief state into its yes/no branches, and Medicate on the "yes" branch gives (~D,A,R). Sensing "partitions" the belief state; non-deterministic actions are normal edges in belief space but hyper-edges in the original state space.]

Unknown State Space
When you buy a Roomba, does it have the "layout" of your home?
– Fat chance! For $200, they aren't going to customize it to everyone's place!
When the map is not given, the robot needs to both "learn" the map and "achieve the goal".
– This integrates search/planning and learning.
Exploration/Exploitation tradeoff:
– Should you bother learning more of the map when you have already found a way of satisfying the goal?
– (At the end of elementary school, should you go ahead and "exploit" the 5 years of knowledge you gained by taking up a job, or explore a bit more by doing high school, college, grad school, post-doc…?)
Most relevant sub-area: Reinforcement learning
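The exploration/exploitation tradeoff mentioned here is often illustrated with the epsilon-greedy rule from reinforcement learning. This toy sketch (an illustration, not something from the lecture) explores with probability epsilon and otherwise exploits the best estimate so far:

```python
import random

def epsilon_greedy(estimates, epsilon, rng=random):
    """With probability epsilon, explore a uniformly random action;
    otherwise exploit the action with the best current value estimate."""
    if rng.random() < epsilon:
        return rng.randrange(len(estimates))          # explore
    return max(range(len(estimates)), key=lambda a: estimates[a])  # exploit

# epsilon = 0 never explores (pure exploitation of current knowledge);
# epsilon = 1 always explores, ignoring what has been learned so far.
best = epsilon_greedy([0.1, 0.9, 0.4], epsilon=0.0)   # always picks index 1
```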

Given a state space of size n (or 2^v, where v is the number of state variables):
– the single-state problem searches for a path in a graph of size n (2^v)
– the multiple-state problem searches for a path in a graph of size 2^n (2^(2^v))
– the contingency problem searches for a sub-graph in a graph of size 2^n (2^(2^v))
The utility of eyes (sensors) is reflected in the size of the effective search space!
In general, the solution is a subgraph rather than a tree (loops may be needed; consider closing a faulty door).
2^n is the EVIL that every CS student's nightmares should be made of.
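These blow-ups are easy to compute directly; the value v = 5 below is just for illustration:

```python
v = 5             # number of boolean state variables (illustrative value)
n = 2 ** v        # size of the state space: single-state problems search here
multi = 2 ** n    # size of the belief space: multiple-state problems search here
# (contingency problems search for a sub-graph of that same 2^n-sized space)
print(n, multi)   # 32 4294967296
```

Five boolean variables already put the belief space at over four billion belief states, which is the sense in which sensors (by shrinking belief states) buy down the effective search space.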

Example: Robotic Path-Planning
States: free-space regions
Operators: movement to neighboring regions
Goal test: reaching the goal region
Path cost: number of movements (distance traveled)
[Figure: a 2D workspace with initial region I, goal region G, and heuristic distance h_D]
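A minimal sketch of this four-part formulation, using a hypothetical 3x3 grid in place of the slide's free-space regions:

```python
# A search problem is (initial state, operators, goal test, path cost).
# Hypothetical 3x3 grid; cell (1, 1) stands in for an obstacle.
FREE = {(r, c) for r in range(3) for c in range(3)} - {(1, 1)}

initial = (0, 0)
goal = (2, 2)

def operators(cell):
    """Movement to neighboring free regions (4-connected)."""
    r, c = cell
    steps = [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
    return [s for s in steps if s in FREE]

def goal_test(cell):
    return cell == goal

def path_cost(path):
    """Number of movements made along the path."""
    return len(path) - 1
```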

1/29
"Mars Rover Disoriented Somewhat After Glitch" (by Kenneth Chang, January 29, 2009)
On the 1,800th Martian day of its mission, NASA's Spirit rover blanked out, and it remains a bit disoriented. Mission managers at NASA's Jet Propulsion Laboratory in Pasadena, Calif., reported Wednesday that the Spirit had behaved oddly on Sunday, the 1,800th Sol, or Martian day, since Spirit's landing on Mars in January 2004. (A Martian Sol is 39.5 minutes longer than an Earth day. The Spirit and its twin, the Opportunity, were designed to last just 90 Sols each, but both continue to operate more than five years later.)
On that day, the Spirit acknowledged receiving its driving directions from Earth, but it did not move. More strangely, the Spirit had no memory of what it had done for that part of the Sol. The rover did not record actions, as it otherwise always does, to the part of its computer memory that retains information even when power is turned off, the so-called nonvolatile memory. "It's almost as if the rover had a bout of amnesia," said John Callas, the project manager for the rovers.
Another rover system did record that power was being drawn from the batteries for an hour and a half. "Meaning the rover is awake doing something," Dr. Callas said. But before-and-after images showed that the Spirit ended the day exactly where it started.
On Monday, mission controllers told the Spirit to orient itself by locating the Sun in the sky with its camera, and it reported that it had been unable to do so. Dr. Callas said the camera did actually photograph the Sun, but it was not quite in the position the rover expected. One hypothesis is that a cosmic ray hit the electronics and scrambled the rover's memory. On Tuesday, the rover's nonvolatile memory worked properly. The Spirit now reports to be in good health and responds to commands from Earth.

General Search

Search algorithms differ based on the specific queuing function they use.
All search algorithms must do the goal test only when the node is picked up for expansion.
We typically analyze properties of search algorithms on uniform trees, with uniform branching factor b and goal depth d (the tree itself may go to depth d_t).
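The point that search algorithms differ only in their queuing function can be made concrete: the loop below is fixed, and swapping the enqueue function turns it from BFS into DFS. The goal test happens only when a node is picked for expansion, as stated above. A minimal sketch (the tiny example tree is made up):

```python
def tree_search(start, successors, is_goal, enqueue):
    """Generic tree search: `enqueue(fringe, children)` fixes the strategy."""
    fringe = [(start, [start])]
    while fringe:
        state, path = fringe.pop(0)
        if is_goal(state):                     # goal-test only on expansion
            return path
        children = [(s, path + [s]) for s in successors(state)]
        enqueue(fringe, children)
    return None

def bfs_enqueue(fringe, kids):
    fringe.extend(kids)                        # FIFO: new nodes go to the back

def dfs_enqueue(fringe, kids):
    fringe[:0] = kids                          # LIFO: new nodes go to the front

# Tiny example tree: A -> B, C; B -> D; C -> G (the goal).
tree = {'A': ['B', 'C'], 'B': ['D'], 'C': ['G'], 'D': [], 'G': []}
path_bfs = tree_search('A', lambda s: tree[s], lambda s: s == 'G', bfs_enqueue)
path_dfs = tree_search('A', lambda s: tree[s], lambda s: s == 'G', dfs_enqueue)
```

Both strategies find the same path here; they differ in the order nodes are expanded, which is what the time/memory analysis that follows is about.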

Evaluating
For the tree below, b=3, d=4.

Breadth-first search on a uniform tree of b=10. Assume 1000 nodes expanded/sec and 100 bytes/node.
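These back-of-the-envelope numbers (b = 10, 1000 nodes expanded per second, 100 bytes per node) are easy to reproduce; the number of nodes down to depth d is 1 + b + b^2 + … + b^d:

```python
def bfs_cost(b, d, nodes_per_sec=1000, bytes_per_node=100):
    """Total nodes to depth d on a uniform tree, and the time and
    memory that implies at the assumed expansion rate and node size."""
    nodes = sum(b ** i for i in range(d + 1))
    return nodes, nodes / nodes_per_sec, nodes * bytes_per_node

nodes, secs, mem = bfs_cost(10, 4)
# 11111 nodes -> about 11 seconds and about 1.1 megabytes; each extra
# level of depth multiplies both time and memory by roughly b = 10.
```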

Qn: Is there a way of getting linear memory search that is complete and optimal?

The search is "complete" now (since there is a finite space to be explored), but still suboptimal.

IDDFS: Review

[Figure: a small search space over nodes A, B, C, D, G]
DFS: A, B, G
BFS: A, B, C, D, G
IDDFS: (A), (A, B, G)
Note that IDDFS can do fewer expansions than DFS on a graph-shaped search space.

Search on undirected graphs, or directed graphs with cycles… Cycles galore…
[Figure: the same search space over nodes A, B, C, D, G, now with cycles]
DFS: A, B, A, B, A, B, A, B, A, B, … (never terminates)
BFS: A, B, G
IDDFS: (A), (A, B, G)
Note that IDDFS can do fewer expansions than DFS on a graph-shaped search space.
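IDDFS itself is just depth-limited DFS run with an increasing limit, which is why it terminates even on a cyclic graph that traps plain DFS in an A, B, A, B, … loop. A minimal sketch (the A–B–G cyclic graph mirrors the style of these slides' examples):

```python
def depth_limited(state, successors, is_goal, limit, path):
    """DFS that refuses to go deeper than `limit` edges."""
    if is_goal(state):
        return path
    if limit == 0:
        return None
    for s in successors(state):
        found = depth_limited(s, successors, is_goal, limit - 1, path + [s])
        if found:
            return found
    return None

def iddfs(start, successors, is_goal, max_depth=20):
    """Iterative deepening: depth-limited DFS with limit 0, 1, 2, ..."""
    for limit in range(max_depth + 1):
        found = depth_limited(start, successors, is_goal, limit, [start])
        if found:
            return found
    return None

# A cyclic graph: undirected edges A-B and B-G. Plain DFS with no
# depth limit or cycle check would oscillate A, B, A, B, ... forever.
graph = {'A': ['B'], 'B': ['A', 'G'], 'G': ['B']}
path = iddfs('A', lambda s: graph[s], lambda s: s == 'G')   # ['A', 'B', 'G']
```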

Graph (instead of tree) Search: Handling repeated nodes
Main points:
– Repeated expansions are a bigger issue for DFS than for BFS or IDDFS.
– Trying to remember all previously expanded nodes and comparing new nodes with them is infeasible:
  – space becomes exponential
  – duplicate checking can also be exponential
– Partial reduction in repeated expansion can be done by:
  – checking whether any children of a node n have the same state as the parent of n
  – checking whether any children of a node n have the same state as any ancestor of n (at most d ancestors for n, where d is the depth of n)
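The partial duplicate reduction described above (checking a node's children against its parent, or against all its ancestors) costs at most O(d) per child, since a node at depth d has d ancestors. A sketch, assuming each node carries its path from the root:

```python
def children_without_ancestor_repeats(state, path, successors):
    """Expand `state` but drop any child whose state already appears
    among the node's ancestors (the at-most-d states on `path`).
    This prunes cycles, though not repeats across different branches,
    which is why it is only a *partial* reduction."""
    ancestors = set(path)      # the parent-only check would use just path[-1:]
    return [s for s in successors(state) if s not in ancestors]

# On the cyclic A-B-G graph, expanding B with ancestors {A, B}
# prunes the child A (a repeat) and keeps only G.
graph = {'A': ['B'], 'B': ['A', 'G'], 'G': ['B']}
kids = children_without_ancestor_repeats('B', ['A', 'B'], lambda s: graph[s])
# kids == ['G']
```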