Open only for Humans; Droids and Robots should go for CSE 462 next door ;-)


Although we will see that all four views of AI (thinking vs. acting, humanly vs. rationally) have their motivations…

Do we want a machine that beats humans in chess, or a machine that thinks like humans while beating them at chess? Deep Blue supposedly DOESN'T think like humans… (But what if the machine is trying to "tutor" humans about how to do things?) (There is a bi-directional flow between thinking humanly and thinking rationally.)

Mechanical flight became possible only when people decided to stop emulating birds… But what if we are writing intelligent agents that interact with humans? The COG project; robotic caregivers.

What AI can do is as important as what it can't yet do… the Captcha project.

Arms race to defeat Captchas (using unwitting masses):
–Start opening an account at Yahoo
–Clip the Captcha test
–Show it to a human trying to get into another site (usually a site that has pretty pictures of persons of the apposite* sex)
–Transfer their answer to Yahoo
*Note: apposite, not opposite. This course is nothing if not open-minded.

It can be argued that all the faculties needed to pass the Turing Test are also needed to act rationally and improve one's success ratio…

–Playing an (entertaining) game of soccer
–Solving NYT crossword puzzles at close to expert level
–Navigating in deep space
–Learning patterns in databases (data mining…)
–Supporting supply-chain management decisions at Fortune 500 companies
–Learning common sense from the web
–Navigating desert roads
–Navigating urban roads
–Bluffing humans in poker…
Discuss on the class blog.

Architectures for Intelligent Agents: wherein we discuss why we need representation, reasoning, and learning

The agent and its environment: What action next? The $$$$$$ Question. (Slide template from "A Unified Brand-name-Free Introduction to Planning", Subbarao Kambhampati.)

"history" = {s0, s1, s2, …, sn, …}
Performance = f(history)
Expected performance = E[f(history)]
Rational ≠ intentionally avoiding sensing and prior knowledge
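The definitions above can be made concrete with a small sketch. This is a minimal Monte Carlo estimate of E[f(history)], with a toy policy and stochastic environment of my own invention (none of the names below come from the slides):

```python
import random

def run_agent(policy, env_step, s0, horizon=10):
    """Generate one history {s0, s1, ..., sn} by acting for `horizon` steps."""
    history = [s0]
    for _ in range(horizon):
        action = policy(history[-1])
        history.append(env_step(history[-1], action))
    return history

def expected_performance(policy, env_step, s0, f, trials=2000):
    """Monte Carlo estimate of E[f(history)] in a stochastic environment."""
    return sum(f(run_agent(policy, env_step, s0)) for _ in range(trials)) / trials

# Toy domain: states are integers, the agent always moves right, and the
# environment cancels the move roughly one time in three.
policy = lambda s: 1
env_step = lambda s, a: s + a + random.choice([0, 0, -1])
score = expected_performance(policy, env_step, 0, lambda h: h[-1])
```

Because the environment is stochastic, a single history says little; only the expectation over many histories reflects the agent's performance.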

Percepts: partial contents of sources, as found by GET
Actions: GET, POST, BUY, …
Goals: cheapest price on specific goods
Environment: the Internet (congestion, traffic, multiple sources)
Qn: How do these affect the complexity of the problem the rational agent faces?
–Lack of percepts makes things harder
–Lack of actions makes things harder
–Complex goals make things harder
–How about the environment?

The agent and its environment, with the dimensions that matter:
Environment: static vs. dynamic
Perception: observable vs. partially observable; perfect vs. imperfect
Actions: deterministic vs. stochastic; instantaneous vs. durative
Goals: full vs. partial satisfaction
What action next? The $$$$$$ Question

–Accessible: the agent can "sense" its environment. Best: fully accessible; worst: inaccessible; typical: partially accessible.
–Deterministic: the actions have predictable effects. Best: deterministic; worst: non-deterministic; typical: stochastic.
–Static: the world evolves only because of the agent's actions. Best: static; worst: dynamic; typical: quasi-static.
–Episodic: the performance of the agent is determined episodically. Best: episodic; worst: non-episodic.
–Discrete: the environment evolves through a discrete set of states. Best: discrete; worst: continuous; typical: hybrid.
–Agents: the number of agents in the environment, and whether they are competing or cooperating.
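One way to make this taxonomy tangible is to record where a given task environment falls on each dimension. The sketch below is illustrative only; the class name, field names, and example environments are my own, not from the slides:

```python
from dataclasses import dataclass

@dataclass
class EnvironmentProfile:
    """Where a task environment falls on the dimensions listed above."""
    accessible: str       # "fully" | "partially" | "inaccessible"
    deterministic: str    # "deterministic" | "stochastic" | "non-deterministic"
    static: str           # "static" | "quasi-static" | "dynamic"
    episodic: bool
    discrete: bool
    num_agents: int

    def is_best_case(self):
        """True only when the environment hits the 'best' value on every dimension."""
        return (self.accessible == "fully"
                and self.deterministic == "deterministic"
                and self.static == "static"
                and self.episodic and self.discrete
                and self.num_agents == 1)

# Solitaire is about as benign as environments get; the comparison-shopping
# agent on the Internet faces a much harder profile.
solitaire = EnvironmentProfile("fully", "deterministic", "static", True, True, 1)
shopping = EnvironmentProfile("partially", "stochastic", "dynamic", False, True, 1)
```

Profiling an environment this way makes explicit which simplifying assumptions (next slide) an agent designer is tempted to make.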

Ways to handle:
–Assume that the environment is more benign than it really is (and hope to recover from the inevitable failures…): assume determinism when it is stochastic; assume static even though it is dynamic.
–Bite the bullet and model the complexity.

Additional ideas/points covered impromptu:
–Complexity of behavior is a product of both the agent and the environment (Simon's ant, in The Sciences of the Artificial).
–The importance of modeling the other agents in the environment: one reason our brains are so large, evolutionarily speaking, may be that we needed them to outwit not other animals but our own enemies.
–The cost of deliberation and modeling: an agent that minutely models the intentions of every other agent in the environment will not necessarily win…
–Bias in learning: often the evidence is consistent with very many hypotheses, so a small agent, to survive, has to use strong biases in learning (the "gavagai" example and the whole-object hypothesis).

(Model-based reflex agents) How do we write agent programs for these?
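One answer, as a hedged sketch: a model-based reflex agent keeps an internal state estimate and consults a condition-action table. The class shape, the thermostat example, and all names below are illustrative assumptions of mine, not code from the course:

```python
class ModelBasedReflexAgent:
    """Reflex agent with internal state, so it can act under partial observability.

    `transition_model(state, last_action, percept)` returns the new state estimate;
    `rules` is an ordered list of (condition, action) pairs."""

    def __init__(self, initial_state, transition_model, rules):
        self.state = initial_state          # best guess at the world state
        self.transition_model = transition_model
        self.rules = rules
        self.last_action = None

    def step(self, percept):
        # State estimation: fold the latest percept and action into the state.
        self.state = self.transition_model(self.state, self.last_action, percept)
        # Rule matching: the first rule whose condition holds fires.
        for condition, action in self.rules:
            if condition(self.state):
                self.last_action = action
                return action

# Toy thermostat: the percept is the current temperature reading.
rules = [
    (lambda s: s["temp"] < 18, "heat_on"),
    (lambda s: True, "heat_off"),           # default rule
]
thermostat = ModelBasedReflexAgent(
    {"temp": 20},
    lambda state, action, percept: {"temp": percept},
    rules,
)
```

The interesting design work is all hidden in `transition_model`, which is exactly the "explicit models of the environment" point that follows.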

This one already assumes that the "sensors → features" mapping has been done! Even basic survival needs state information…

EXPLICIT MODELS OF THE ENVIRONMENT:
--Blackbox models
--Factored models: logical models; probabilistic models
(aka model-based reflex agents)
State estimation

A robot localizing itself using particle filters
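To give a flavor of the idea, here is a minimal one-dimensional particle filter: a robot on a circular corridor with a door sensor. The corridor, door positions, noise levels, and weights are all invented for illustration; a real localizer would use calibrated motion and sensor models:

```python
import random

def particle_filter_step(particles, move, sensed_door, doors, noise=0.02):
    """One predict/weight/resample cycle for 1-D localization on a ring [0, 1)."""
    # Predict: move every particle, with Gaussian motion noise.
    particles = [(p + move + random.gauss(0, noise)) % 1.0 for p in particles]

    # Weight: a particle is plausible if its predicted door reading matches.
    def weight(p):
        near_door = any(abs(p - d) < 0.05 or abs(p - d) > 0.95 for d in doors)
        return 0.9 if near_door == sensed_door else 0.1

    weights = [weight(p) for p in particles]
    # Resample particles in proportion to their weights.
    return random.choices(particles, weights=weights, k=len(particles))

doors = [0.2, 0.7]
particles = [random.random() for _ in range(500)]   # uniform prior: robot is lost
# The robot starts at a door, then moves +0.1 and sees no door.
particles = particle_filter_step(particles, 0.0, True, doors)
particles = particle_filter_step(particles, 0.1, False, doors)
```

After two observations the particle cloud concentrates near 0.3 and 0.8 (one step past each door); further motion and sensing would disambiguate the two hypotheses.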

It is not always obvious what action to do now, given a set of goals. You woke up in the morning and you want to attend a class; what should your action be?
–Search: find a path from the current state to the goal state; execute the first operator.
–Planning: does the same for structured (non-blackbox) state models.
State estimation feeds search/planning.
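The search idea can be sketched in a few lines with breadth-first search over a black-box state space. The "morning" domain below (locations and actions) is a toy of my own, just to echo the wake-up example:

```python
from collections import deque

def bfs_plan(start, goal, successors):
    """Shortest action sequence from start to goal.
    `successors(state)` yields (action, next_state) pairs."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan
        for action, nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None  # goal unreachable

# Toy "morning" domain: states are locations, actions move between them.
graph = {
    "bed":       [("get_up", "home")],
    "home":      [("drive", "campus"), ("snooze", "bed")],
    "campus":    [("walk", "classroom")],
    "classroom": [],
}
plan = bfs_plan("bed", "classroom", lambda s: graph[s])
# plan == ["get_up", "drive", "walk"]; the agent executes the first op, "get_up".
```

Planning differs in that the states are factored (e.g. logical assertions) rather than opaque nodes, which lets the planner reason about actions without enumerating the whole space.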

How the course topics stack up:
–Representation mechanisms: logic (propositional; first order); probabilistic logic
–Learning the models
–Search: blind, informed
–Planning
–Inference: logical resolution; Bayesian inference

--Decision-theoretic planning
--Sequential decision problems
"…certain inalienable rights: life, liberty and the pursuit of…" ?Money ?Daytime TV ?Happiness (utility)

Discounting. The decision-theoretic agent often needs to assess the utility of sequences of states (also called behaviors).
–One technical problem: how do we keep the utility of an infinite sequence finite?
–A closely related real problem: how do we combine the utility of a future state with that of a current state (how does $15 tomorrow compare with $5000 when you retire?)
–Both are handled by picking a discount factor r (0 < r < 1) and multiplying the utility of the n-th state by r^n:
  r^0 U(s0) + r^1 U(s1) + … + r^n U(sn) + …
  This is guaranteed to converge, since the geometric series converges for 0 < r < 1.
–r is set by the individual agent based on how it thinks future rewards stack up against current ones. An agent that expects to live longer may use a larger r than one that expects to live shorter…
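The convergence claim and the retirement question are both easy to check numerically. The 0.8 yearly discount and 40-year horizon below are made-up illustrative numbers, not values from the slides:

```python
def discounted_utility(utilities, r):
    """Sum of r^n * U(s_n) over a (prefix of a) state sequence, with 0 < r < 1."""
    return sum((r ** n) * u for n, u in enumerate(utilities))

# Convergence: an infinite run of constant utility u sums to u / (1 - r),
# so a long finite prefix should already be very close to the limit.
r, u = 0.9, 1.0
approx = discounted_utility([u] * 1000, r)   # long finite prefix
exact = u / (1 - r)                          # geometric-series limit

# $15 tomorrow vs. $5000 at retirement, with an illustrative yearly
# discount factor of 0.8 and retirement 40 years away:
tomorrow = 15 * 0.8 ** (1 / 365)
retirement = 5000 * 0.8 ** 40
```

With these numbers the $15 tomorrow wins comfortably, since 0.8^40 shrinks the retirement payout by several orders of magnitude; a more patient agent (larger r) would reverse the verdict.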

Learning dimensions:
–What can be learned? Any of the boxes representing the agent's knowledge: action descriptions, effect probabilities, causal relations in the world (and the probabilities of causation), utility models (sort of, through credit assignment), sensor-data interpretation models.
–What feedback is available? Supervised, unsupervised, "reinforcement" learning; the credit-assignment problem.
–What prior knowledge is available? "Tabula rasa" (the agent's head is a blank slate) or pre-existing knowledge.