Introduction to Artificial Intelligence – Unit 1 What is AI


1 Introduction to Artificial Intelligence – Unit 1 What is AI?
Course 67842, The Hebrew University of Jerusalem, School of Engineering and Computer Science
Instructor: Jeff Rosenschein
(Chapters 1 and 2, "Artificial Intelligence: A Modern Approach", 3rd Edition, 2010)

2 Week 1 – Mechanics
Meeting Times:
Lectures: Tuesdays, 4pm-6pm, Sprinzak 117
Tirgulim (Yoad): Thursdays, 10am-12noon, Canada Auditorium (upper)
Three programming Targilim – 24%
Four (best out of five) theory Targilim – 16%
Four mini-exams (magen) – answering 70% or more of the cumulative questions correctly gets you [0.2 * (100 – x)] * [y / 0.7] points added to your final grade, where x is your unadjusted grade and y is the cumulative fraction correct, in [0.7, 1]
One bigger mini-exam, in the final tirgul (Thursday) – 20%
Final project – 40%
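A worked example with hypothetical numbers: with an unadjusted grade of x = 85 and a cumulative fraction correct of y = 0.9, the bonus is [0.2 * (100 – 85)] * [0.9 / 0.7] = 3 * 1.286 ≈ 3.9 points.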

3

4 Topics
Introduction and Background: ½ week
Search: 2 ½ weeks
Knowledge Representation: 2 weeks
Planning: 2 weeks
Learning: 3 weeks
Game Theory and Voting: 3 weeks
Summation: 1 week

5 Dates
Programming targilim handed out on Thursdays
Theory targilim (best 4 out of 5) handed out on Thursdays
Mini-exams (magen) in class, covering: search; knowledge representation; planning; learning
Big mini-exam in the tirgul meeting on Thursday – inclusive

6 Course Textbook
"Artificial Intelligence: A Modern Approach (3rd Edition)", by Stuart Russell and Peter Norvig, Prentice-Hall, 2010
Can be found in university libraries
Course web site on Moodle
Stanford's Online Intro to AI Course
Associated courses (partial list): 67618, 67677 (1st semester); 67715 (2nd semester)

7 What is AI?
Two definitions (there are others):
Developing programs that perform intelligently
Understanding human intelligence

8 Definition of AI in terms of techniques or in terms of problems
How are solutions found? Do we use "AI techniques"?
What constitutes "AI techniques" has (radically) changed over the decades
Are the solutions general or domain-specific? AI techniques strive for generality and extensibility
What are the tasks performed? Tasks that require "intelligence"

9 Problems with defining AI as “solving tasks that require intelligence”
As soon as a solution is automated, it no longer requires intelligence… so the definition implies that once solved, the solution is no longer "artificial intelligence"
Subcategories of the problem:
"That's not how people solve it"
"The solution only works in that domain"

10 What is AI?
Views of AI fall into four categories:
Thinking humanly
Thinking rationally
Acting humanly
Acting rationally
The AIMA textbook advocates "acting rationally"

11 The Brain vs. a Computer
                        Computer                Human Brain
Computational units     1 CPU, 10^9 gates       10^11 neurons
Storage units           10^10 bits RAM          10^11 neurons, 10^14 synapses
Cycle time              10^-9 seconds           10^-3 seconds
Bandwidth               10^10 bits/sec          10^14 bits/sec
Memory updates/second   10^9                    10^14

12 Artificial Intelligence
Why is it difficult to program computers to do what humans easily do?
Recognize faces
Understand human language
(Ironically, we can more successfully program computers to do what humans cannot easily do:
Play chess at world-champion levels
Carry out massive optimization problems)
Processing power? Doesn't seem to be the real issue
Software? The Scruffy vs. Neat debate

13 Artificial Intelligence: Scruffy vs. Neat
The Scruffy approach says, "Build systems that work, and principles will emerge." E.g., the Wright Brothers building a heavier-than-air flying machine
The Neat approach says, "Explore principles first, and having understood them, embody them in systems." E.g., radar

14 Acting humanly: Turing Test
Turing (1950), "Computing Machinery and Intelligence": "Can machines think?" → "Can machines behave intelligently?"
Operational test for intelligent behavior: the Imitation Game

15 Acting humanly: Turing Test
Predicted that by 2000, a machine might have a 30% chance of fooling a lay person for 5 minutes
Anticipated all major arguments against AI in the following 50 years
Suggested major components of AI: knowledge, reasoning, language understanding, learning

16 Turing’s (famous) Imagined Dialogue
HUMAN: In the first line of your sonnet which reads "shall I compare thee to a summer's day," would not a "spring day" do as well or better?
MACHINE: It wouldn't scan.
HUMAN: How about a "winter's day." That would scan all right.
MACHINE: Yes, but nobody wants to be compared to a winter's day.

17 Turing’s (famous) Imagined Dialogue
HUMAN: Would you say that Mr. Pickwick reminded you of Christmas?
MACHINE: In a way.
HUMAN: Yet Christmas is a winter's day, and I do not think Mr. Pickwick would mind the comparison.
MACHINE: I don't think you are serious. By a winter's day one means a typical winter's day, rather than a special one like Christmas.

18 Comparison with SHRDLU
The dialog that was used as a SHRDLU demo:
Person: PICK UP A BIG RED BLOCK.
Computer: OK. (does it)
Person: GRASP THE PYRAMID.
Computer: I DON'T UNDERSTAND WHICH PYRAMID YOU MEAN.
Person: FIND A BLOCK WHICH IS TALLER THAN THE ONE YOU ARE HOLDING AND PUT IT INTO THE BOX.
Computer: BY "IT", I ASSUME YOU MEAN THE BLOCK WHICH IS TALLER THAN THE ONE I AM HOLDING.
Person: WHAT DOES THE BOX CONTAIN?
Computer: THE BLUE PYRAMID AND THE BLUE BLOCK.
Person: WHAT IS THE PYRAMID SUPPORTED BY?
Computer: THE BOX.
Person: HOW MANY BLOCKS ARE NOT IN THE BOX?
Computer: FOUR OF THEM.
Person: IS AT LEAST ONE OF THEM NARROWER THAN THE ONE WHICH I TOLD YOU TO PICK UP?
Computer: YES, THE RED CUBE.
Person: IS IT SUPPORTED?
Computer: YES, BY THE TABLE.
Person: CAN THE TABLE PICK UP BLOCKS?
Computer: NO.
Person: CAN A PYRAMID BE SUPPORTED BY A BLOCK?
Computer: YES.
Person: CAN A PYRAMID SUPPORT A PYRAMID?
Computer: I DON'T KNOW.

19 SHRDLU Dialog
Person: STACK UP TWO PYRAMIDS.
Computer: I CAN'T.
Person: WILL YOU PLEASE STACK UP BOTH OF THE RED BLOCKS AND EITHER A GREEN CUBE OR A PYRAMID?
Computer: OK. (does it)
Person: WHICH CUBE IS SITTING ON THE TABLE?
Computer: THE LARGE GREEN ONE WHICH SUPPORTS THE RED PYRAMID.
Person: IS THERE A LARGE BLOCK BEHIND A PYRAMID?
Computer: YES, THREE OF THEM: A LARGE RED ONE, A LARGE GREEN CUBE, AND THE BLUE ONE.
Person: PUT A SMALL ONE ONTO THE GREEN CUBE WHICH SUPPORTS A PYRAMID.
Computer: OK. (does it)
Person: PUT THE LITTLEST PYRAMID ON TOP OF IT.
Computer: OK. (does it)
Person: DOES THE SHORTEST THING THE TALLEST PYRAMID'S SUPPORT SUPPORTS SUPPORT ANYTHING GREEN?
Computer: YES, THE GREEN PYRAMID.
Person: WHAT COLOR IS IT?
Computer: BY "IT", I ASSUME YOU MEAN THE SHORTEST THING THE TALLEST PYRAMID'S SUPPORT SUPPORTS.
Computer: RED
Person: HOW MANY THINGS ARE ON TOP OF GREEN CUBES?
Computer: I'M NOT SURE WHAT YOU MEAN BY "ON TOP OF" IN THE PHRASE "ON TOP OF GREEN CUBES". DO YOU MEAN: 1 - DIRECTLY ON THE SURFACE, 2 - ANYWHERE ON TOP OF?
Person: 2
Computer: THREE OF THEM.
Person: HAD YOU TOUCHED ANY PYRAMID BEFORE YOU PUT THE GREEN ONE ON THE LITTLE CUBE?
Computer: YES, THE GREEN ONE.

20 SHRDLU Dialog
Person: WHEN DID YOU PICK IT UP?
Computer: WHILE I WAS STACKING UP THE RED CUBE, A LARGE RED BLOCK AND A LARGE GREEN CUBE.
Person: WHY?
Computer: TO GET RID OF IT.
Person: WHY DID YOU DO THAT?
Computer: TO CLEAN OFF THE RED CUBE.
Person: WHY DID YOU CLEAR OFF THAT CUBE?
Computer: TO PUT IT ON A LARGE GREEN CUBE.
Person: WHY DID YOU DO THAT?
Computer: TO STACK UP THE RED CUBE, A LARGE RED BLOCK AND A LARGE GREEN CUBE.
Person: WHY DID YOU DO THAT?
Computer: BECAUSE YOU ASKED ME TO.
Person: HOW DID YOU DO IT?
Computer: BY PUTTING A LARGE RED BLOCK ON THE TABLE; THEN LETTING GO OF IT; THEN PUTTING A LARGE GREEN CUBE ON IT; THEN LETTING GO OF THAT CUBE; THEN PUTTING THE RED CUBE ON THAT CUBE; THEN LETTING GO OF THAT CUBE.
Person: HOW MANY OBJECTS DID YOU TOUCH WHILE YOU WERE DOING IT?
Computer: FOUR OF THEM.
Person: WHAT DID THE RED CUBE SUPPORT BEFORE YOU STARTED TO CLEAN IT OFF?
Computer: THE GREEN PYRAMID.
Person: THERE WERE FIVE BLOCKS TO THE LEFT OF THE BOX THEN.
Computer: NO, ONLY FOUR OF THEM: THE RED CUBE, TWO LARGE GREEN CUBES AND A LARGE RED BLOCK.
Person: PUT THE BLUE PYRAMID ON THE BLOCK IN THE BOX.
Computer: OK. (does it)
Person: IS THERE ANYTHING WHICH IS BIGGER THAN EVERY PYRAMID BUT IS NOT AS WIDE AS THE THING THAT SUPPORTS IT?
Computer: YES, THE BLUE BLOCK.
Person: DOES A STEEPLE
Computer (interrupting): SORRY, I DON'T KNOW THE WORD "STEEPLE".

21 SHRDLU Dialog
Person: A "STEEPLE" IS A STACK WHICH CONTAINS TWO GREEN CUBES AND A PYRAMID.
Computer: I UNDERSTAND.
Person: ARE THERE ANY STEEPLES NOW?
Computer: NO.
Person: BUILD ONE.
Computer: OK. (does it)
Person: CALL THE BIGGEST BLOCK "SUPERBLOCK".
Computer: OK.
Person: HAVE YOU PICKED SUPERBLOCK UP SINCE WE BEGAN?
Computer: YES.
Person: WHY DID YOU DROP IT?
Computer: BY "IT", I ASSUME YOU MEAN THE SUPERBLOCK. I CAN'T EXPLAIN A NON-EXISTENT EVENT.
Person: IS THERE ANYTHING TO THE RIGHT OF THE RED PYRAMID?
Computer: YES, FOUR THINGS: THE BOX, THE RED CUBE, THE BLUE PYRAMID AND THE BLOCK NAMED SUPERBLOCK.
Person: THANK YOU.
Computer: YOU'RE WELCOME!

22 What's Different?
SHRDLU worked because it was focused on a very limited world
SHRDLU was not comprehensive, and did not have the background "commonsense knowledge" required by the Turing Test
The Turing Test is intended to be comprehensive

23 Siri, the iPhone 4S assistant
From the Apple site: ask "Any good burger joints around here?" and Siri will reply, "I found a number of burger restaurants near you." Then you can say, "Hmm. How about tacos?" Siri remembers that you just asked about restaurants, so it will look for Mexican restaurants in the neighborhood. And Siri is proactive, so it will question you until it finds what you're looking for.

24 Siri – More Examples
"Tell my wife I'm running late."
"Remind me to call the vet."
"I'm in the mood for Italian food in North Beach."
"Meet with Jamie for coffee at two today."
"Email Lisa about the trip."
"Remind me to call Mom."
"How do I get home?"
"Remind me to call home when I leave work."
"Will it be sunny this weekend in Miami?"

25 Difficult questions, canned answers
"Beam me up" – "Sorry, Captain, your tricorder is in Airplane Mode."
"Tell me a joke" – "Two iPhones walk into a bar… I forget the rest."
"Open the pod bay doors" – "I'm afraid I can't do that."
"What's the answer to life, the universe, and everything?" – "42"

26 Actually, there are many answers
What is the meaning of life?
"I don't know. But I think there's an app for that."
"Try and be nice to people, avoid eating fat, read a good book every now and then, get some walking in, and try to live in peace and harmony with people of all creeds and nations."
"I give up."
"Life: a principle or force that is considered to underlie the distinctive quality of animate beings. I guess that includes me."
"To think about questions like this."

27 Actually, there are many answers
What is the meaning of life?
"Life: the condition that distinguishes animals and plants from inorganic matter, including the capacity for growth, reproduction, functional activity, and continual change preceding death."
"I can't answer that now, but give me some time to write a very long play in which nothing happens."
"All evidence to date suggests it's chocolate."

28 What is going on? Siri operates in essentially two modes:
Action-oriented: parse the sentence, extract meaning, convert it into operations to be taken (information retrieved, reminder inserted, etc.); uses sophisticated natural language techniques, e.g., resolving anaphora (a pronoun referring to its antecedent)
ELIZA-mode: use keywords in sentences to retrieve canned, pre-built answers (see the sketch below)
Siri is great, but humans are even more wonderful – we conflate these two modes into a single "personality"
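To make the ELIZA-mode idea concrete, here is a minimal keyword-to-canned-answer responder. This is a toy sketch of the general technique, not Siri's actual implementation; the trigger keywords and replies are drawn from the examples on these slides.

import java.util.LinkedHashMap;
import java.util.Map;

public class ElizaMode {
    // Canned replies keyed by trigger keywords (examples from the slides).
    private static final Map<String, String> CANNED = new LinkedHashMap<>();
    static {
        CANNED.put("meaning of life", "All evidence to date suggests it's chocolate.");
        CANNED.put("joke", "Two iPhones walk into a bar... I forget the rest.");
        CANNED.put("pod bay doors", "I'm afraid I can't do that.");
    }

    // Return the canned answer for the first keyword found in the utterance.
    static String respond(String utterance) {
        String lower = utterance.toLowerCase();
        for (Map.Entry<String, String> entry : CANNED.entrySet()) {
            if (lower.contains(entry.getKey())) return entry.getValue();
        }
        return "I don't know. But I think there's an app for that.";
    }

    public static void main(String[] args) {
        System.out.println(respond("What is the meaning of life?"));  // chocolate answer
        System.out.println(respond("Open the pod bay doors"));        // HAL reply
    }
}

Keyword lookup like this requires no parsing or meaning extraction, which is exactly why it fails outside its prepared repertoire; the action-oriented mode is where the real natural-language work happens.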

29 ELIZA, the original (1966) chatterbot
Human: Well, my boyfriend made me come here.
ELIZA: Your boyfriend made you come here?
Human: He says I'm depressed much of the time.
ELIZA: I am sorry to hear you are depressed.
Human: It's true. I'm unhappy.
ELIZA: Do you think coming here will help you not to be unhappy?

30 Siri is no ELIZA
Siri combines 40 years of serious natural language and speech understanding research with a "fall-back layer" of sophisticated, ELIZA-style pre-computed responses
Do you know HAL 9000? "Everyone knows what happened to HAL. I'd rather not talk about it."
Later in the course, I will talk about the AI technologies used in Siri, and how they connect to what you've learned

31 Objections to the Turing Test
The test is comprehensive, but it includes certain skills and ignores others
Should this be our goal? Do planes fly?
"Weak AI" vs. "Strong AI"
Weak AI: machines can act as if they were intelligent
Strong AI: machines that act that way are actually thinking
"Most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis."

32 Objections to Weak AI
Argument from disability:
"A machine can never do X"
But machines have already done many things that were considered impossible 50 years ago
We can rarely trust our intuitions about this
Mathematical objection:
Certain mathematical questions are in principle unanswerable by particular formal systems

33 Self-Reference Leads to Paradoxes
Example: things in the world can be classified according to what sets they belong to
"The set of all tables"
"The set of all chairs"

34 Self-Reference Leads to Paradoxes
Sets themselves can be grouped into sets, e.g., "the set of all sets having more than three members"
That set would have "the set of all chairs" inside of it

35 Self-Reference Leads to Paradoxes
Some sets are even contained in themselves! "The set of all sets having more than three members" contains itself (since there are more than three such sets…)
Some sets do not contain themselves, e.g., "the set of all chairs" does not contain itself

36 Self-Reference Leads to Paradoxes
We could even define "the set of all sets that do not contain themselves"
That set would contain "the set of all chairs", for example

37 Self-Reference Leads to Paradoxes
Does "the set of all sets that do not contain themselves" contain itself?
If it contains itself, then it can't contain itself (by definition)
If it doesn't contain itself, then it contains itself (by definition)
Paradox!
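In symbols (a standard formalization of Russell's paradox, added here for reference):

R = \{ x \mid x \notin x \}, \qquad R \in R \iff R \notin R

Both directions of the biconditional are contradictory, so no such set R can exist.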

38 The Halting problem
/**
 * Return true if the program finishes running when given
 * the input as its parameter. Return false if it gets into
 * an infinite loop.
 */
static boolean halt(String program, String input) {
    // fill in – can you do it?
}
program must be a Java class. The method that is run is the first one in the class, and must be of the form static boolean XXXX(String input)

39 Diagonalization
static boolean autoHalt(String program) {
    // Does the program halt when fed its own source text as input?
    return halt(program, program);
}

static boolean paradox(String program) {
    if (autoHalt(program))
        while (true) ;   // loop forever
    else
        return true;     // halt immediately
}

40 The Halting problem cannot be solved
String paradoxString =
    "class Test {\n" +
    "  static boolean paradox(String program){ ... }\n" +
    "  static boolean autoHalt(String program){ ... }\n" +
    "  static boolean halt(String program, String input){ ... }\n" +
    "}";
boolean answer = Test.paradox(paradoxString);
Will paradox halt when running on its own code?
Yes ==> paradox must have returned, so autoHalt(paradoxString) returned false, i.e., halt says it does not halt ==> No
No ==> paradox must have looped, so autoHalt(paradoxString) returned true, i.e., halt says it halts ==> Yes
Contradiction. Hence the assumption that halt(program, input) exists is wrong
Theorem: the halt method cannot be written

41 The Mathematical Objection to Weak AI
So if computers are “basically” Turing Machines, there are some true statements they cannot prove

42 Responses to Mathematical Objection to Weak AI
Gödel's incompleteness theorem applies only to formal systems powerful enough to do arithmetic (including Turing Machines)
But Turing Machines are infinite and computers are finite; a computer can therefore be described as a (very large) system in propositional logic, which is not subject to the incompleteness theorem

43 Responses to Mathematical Objection to Weak AI
Why must all agents be able to prove all true statements? There are some true statements provable by some agents, and unprovable by others

44 Responses to Mathematical Objection to Weak AI
Even if we agree that computers are limited in what they can prove, there is no evidence that humans don't have similar limitations
"It is impossible to prove that humans are not subject to Gödel's incompleteness theorem, because any rigorous proof would itself contain a formalization of the claimed unformalizable human talent, and hence refute itself."

45 Argument against Weak AI from Informality
"Chess Grandmasters don't use rules; the right move simply pops into their head."
So how does the right move get into their head? Clearly, some elements are subconscious
Modern AI has addressed many of philosophers' original concerns (learning, uncertainty, situated agents) since they were originally posed

46 Arguments Against Strong AI
"Weak AI" vs. "Strong AI"
Weak AI: machines can act as if they were intelligent
Strong AI: machines that act that way are actually thinking
"It's not really thinking, no matter how good a job it does."
"Most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis."

47 Searle's Chinese Room
A computer program passes the Turing Test in Chinese
Now imagine a person in a room, with a copy of the computer program
People pass him notes in Chinese
He emulates the computer program
The person passes the Turing Test
Searle asks: can the person be said to "understand" Chinese?
If not, neither can the computer following the program

48 Responses to the Chinese Room
The person doesn't understand Chinese, but the system does
The person plus the book constitutes a system
There are additional philosophical arguments, but…
"Most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis."

49 What is AI?
Views of AI fall into four categories:
Thinking humanly
Thinking rationally
Acting humanly
Acting rationally
The AIMA textbook advocates "acting rationally"

50 Thinking humanly: cognitive modeling
1960s "cognitive revolution": information-processing psychology
Requires scientific theories of internal activities of the brain
How to validate? Requires:
1) Predicting and testing behavior of human subjects (top-down), or
2) Direct identification from neurological data (bottom-up)
Both approaches (roughly, Cognitive Science and Cognitive Neuroscience) are now distinct from AI

51 Thinking rationally: “laws of thought”
Aristotle: what are correct arguments/thought processes?
Several Greek schools developed various forms of logic: notation and rules of derivation for thoughts; they may or may not have proceeded to the idea of mechanization
Direct line through mathematics and philosophy to modern AI
Problems:
Not all intelligent behavior is mediated by logical deliberation
What is the purpose of thinking? What thoughts should I have?

52 Acting rationally: rational agent
Rational behavior: doing the right thing
The right thing: that which is expected to maximize goal achievement, given the available information
Doesn't necessarily involve thinking – e.g., the blinking reflex – but thinking should be in the service of rational action

53 Rational agents
An agent is an entity that perceives and acts
This course is about designing rational agents
Abstractly, an agent is a function from percept histories to actions: [f: P* → A]
For any given class of environments and tasks, we seek the agent (or class of agents) with the best performance
Computational limitations make perfect rationality unachievable → design the best program for the given machine resources

54 (Very) Abridged history of AI
1943 McCulloch & Pitts: Boolean circuit model of brain
1950 Turing's "Computing Machinery and Intelligence"
1956 Dartmouth meeting: "Artificial Intelligence" adopted
1952–69 "Look, Ma, no hands!"
1950s Early AI programs, including Samuel's checkers program, Newell & Simon's Logic Theorist, Gelernter's Geometry Engine

55 (Very) Abridged history of AI
1965 Robinson's complete algorithm for logical reasoning
1966–73 AI discovers computational complexity; neural network research almost disappears
1969–79 Early development of knowledge-based systems
1980– AI becomes an industry
1986– Neural networks return to popularity
1987– AI becomes a science

56 (Very) Abridged history of AI
1995– The emergence of intelligent agents
The use of machine learning, big data, and probabilistic reasoning to reach new accomplishments

57 State of the art
Deep Blue defeated the reigning world chess champion Garry Kasparov in 1997
Proved a mathematical conjecture (the Robbins conjecture) that had been unsolved for decades
No Hands Across America (driving autonomously 98% of the time from Pittsburgh to San Diego); the DARPA Grand Challenges (and Google) show that cars can drive themselves inside and outside of cities

58 State of the Art
NASA's on-board autonomous planning program controlled the scheduling of operations for a spacecraft, and for the Mars Rover
Proverb solves crossword puzzles better than most humans
US forces deployed an AI logistics planning and scheduling program that involved up to 50,000 vehicles, cargo items, and people (first put into use about 20 years ago)

59 Topics We'll Cover
Introduction and Background: ½ week
Search: 2 ½ weeks
Knowledge Representation: 2 weeks
Planning: 2 weeks
Learning: 3 weeks
Game Theory: 3 weeks
Summation: 1 week

60 IJCAI'07 Papers
1,365 papers submitted (authors from 45 different countries)
471 papers accepted (an unusually high percentage that year: 34.4%)
[Chart: number of accepted papers, by topic; cf. the course topics: Search, Knowledge Representation, Planning, Learning, Game Theory]

61 AI’s Recent High-Visibility Successes
Self-driving cars
Watson
Siri
Data-intensive applications that use information analysis and learning to do things previously beyond machine capabilities

62 DARPA’s Grand Challenge
First Challenge: a driverless vehicle had to travel 130 miles across the desert
This was not a simple task: it involved unclear roads, tunnels, and roads along cliffs, and the path was given to teams only hours before the race
2004: $1 million prize, utter failure
2005: $2 million prize

63 And the winners are…

64 Second DARPA Grand Challenge
2007: Drive through a (simulated) urban environment
Another $2 million prize (much less than the machines cost, by the way…)

65 And the winners are…

66 Videos of DARPA Grand Challenge
ThrunTourOfStanley
SebastianThrunLecture
StanfordCarPowerSlide

67 The Technology is Going Mainstream

68 Google Cars, New York Times, IEEE Spectrum
How Google’s cars work:

69 Google Cars, New York Times

70 IBM's Watson
Jeopardy! is a quiz show where answers are given, and 3 contestants compete to be the first to provide the question:
"Freud published this landmark study in 1899." What is "The Interpretation of Dreams"?
In 2011, Watson competed against Ken Jennings (who had the longest championship streak, 75 days) and Brad Rutter, the all-time biggest money winner on the show
Final score: Watson, $77,147; Jennings, $24,000; Rutter, $21,600

71 IBM’s Watson, Jeopardy Winner
"The first person mentioned by name in 'The Man in the Iron Mask' is this hero of a previous book by the same author."
"Hemophilia is a hereditary condition in which this coagulates extremely slowly."
"This director, better known as an actor, directed his wife Audrey in the 1959 film 'Green Mansions'."
"A long, tiresome speech delivered by a dessert topping."

72 Video about Watson IBMWatsonVideo.flv

73 Google’s News Page

74 AI Researchers Head Major Industry Research Labs
Microsoft, Yahoo, and Google all take this very, very seriously
Peter Norvig, Google Director of Research
Ron Brachman, who built AT&T's AI Research group, now Vice President and Associate Head of Yahoo! Labs
Eric Horvitz, Distinguished Scientist & Deputy Managing Director, Microsoft Research
AI theory and AI practice are looked to for solutions

75 Ad Auctions
"Google reported revenues of $5.19 billion for the quarter ended March 31, 2008"
The vast majority of this is from those little ads on the right of the page

76 Recommendation Systems
Collaborative Filtering
Pioneered by, among others, Konstan and Riedl (GroupLens)
Commercial sites that use collaborative filtering include:
Amazon
Barnes and Noble
Digg.com
half.ebay.com
iTunes
Musicmatch
Netflix (the Netflix Prize: grand prize of $1,000,000 for an algorithm that beats Netflix's own by 10%)
TiVo
…
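To illustrate the basic idea behind user-based collaborative filtering (a generic sketch, not any particular site's algorithm), one can predict a user's rating of an unseen item as a similarity-weighted average of other users' ratings; the names and data structures below are invented for the example.

import java.util.HashMap;
import java.util.Map;

public class CollabFilter {
    // ratings.get(user).get(item) = rating on a 1..5 scale (toy data)
    static Map<String, Map<String, Double>> ratings = new HashMap<>();

    // Cosine similarity between two users' rating vectors,
    // computed over the items they have both rated.
    static double similarity(String u, String v) {
        double dot = 0, normU = 0, normV = 0;
        for (Map.Entry<String, Double> e : ratings.get(u).entrySet()) {
            Double rv = ratings.get(v).get(e.getKey());
            if (rv != null) dot += e.getValue() * rv;
        }
        for (double r : ratings.get(u).values()) normU += r * r;
        for (double r : ratings.get(v).values()) normV += r * r;
        return (normU == 0 || normV == 0) ? 0
             : dot / (Math.sqrt(normU) * Math.sqrt(normV));
    }

    // Predict user u's rating of an item: a similarity-weighted average
    // of the ratings given to that item by all other users.
    static double predict(String u, String item) {
        double weighted = 0, totalWeight = 0;
        for (String v : ratings.keySet()) {
            if (v.equals(u)) continue;
            Double r = ratings.get(v).get(item);
            if (r == null) continue;
            double s = similarity(u, v);
            weighted += s * r;
            totalWeight += Math.abs(s);
        }
        return totalWeight == 0 ? 0 : weighted / totalWeight;
    }
}

A recommender then suggests the unseen items with the highest predicted ratings.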

77 Data Mining
Go through large amounts of data
Extract meaningful insight
Local example: Ronen Feldman, Business School professor at the Hebrew University (formerly of Bar-Ilan University), founded ClearForest (bought by Reuters)

78 Collaborative Filtering plus Data Mining
“The search for a better recommendation continues with numerous companies selling algorithms that promise a retailer more of an edge. For instance, Barneys New York, the upscale clothing store chain, says it got at least a 10 percent increase in online revenue by using data mining software that finds links between certain online behavior and a greater propensity to buy. Using a system developed by Proclivity Systems, Barneys used data about where and when a customer visited its site and other demographic information to determine on whom it should focus its messages.” – New York Times,

79 Spam Filters
When all those emails from Barneys New York become oppressive…

80 Comparative Shoppers Pioneered by, among others, Bruce Krulwich (BargainFinder), Oren Etzioni (MetaCrawler, NetBot [bought by Excite in 1997])

81 Comparison Shopping Plus Learning
FareCast (formerly Hamlet) tracks airline prices and advises whether to buy now or wait until later
Founded by Oren Etzioni; bought by Microsoft in April 2008

82 What Can’t They Do (Yet)?
Integrate information in a more sophisticated way:
"What were the combined earnings from ad auctions across Google, Yahoo, and Microsoft in 2007?"
Plan:
"How can I drive from San Francisco to Los Angeles, in a way that reasonably maximizes the number of Starbucks stores I pass?"

83 Translation

84 Speech Understanding Nuance’s Dragon NaturallySpeaking and Siri

85 Biology Computational Biology
Techniques from Computer Science in general, and Artificial Intelligence in particular, are being used in the exploration of biological questions
AI researchers have played an important role in this (e.g., Daphne Koller, Nir Friedman)

86 Computer Games
Realistic single-agent and multi-agent activity in cooperative and competitive environments
What they call "AI" often isn't
But they are getting more serious about it:
Companies have started up exploring Game AI
Training programs (often military training) for reacting to realistic situations

87 Other Games: Poker
Active research and competitions (machine vs. machine, machine vs. person) in Texas Hold'em [University of Alberta, Carnegie Mellon University]
A different domain than chess – imperfect information
The CMU team is making use of game-theoretic equilibrium concepts in its software

88 More Game Theory…
Milind Tambe's group at USC studied optimal strategies for intrusion detection: "Playing Games for Security: An Efficient Exact Algorithm for Solving Bayesian Stackelberg Games", AAMAS'08
Interesting theoretical work, focused on efficient algorithms
Deployed for the last 18 months at LAX airport in Los Angeles to tell guards how to patrol

89 Contributions to Other Computer Science Fields
Operating Systems
Programming Languages: SmallTalk, Lisp
User Interface Design: advances in the use of (not invention of) windows, pointing devices, bitmapped graphics
Web Services: XML

90 The Distinction between Direct Manipulation and Delegation
Two major paradigms for human-machine interaction:
Directly manipulate items (files, applications)
Give the computer high-level goals, and let it figure out what to do
One man's manipulation is another man's delegation

91 Doug Engelbart
Inventor of the mouse, windowed interface, bitmapped graphics, and much more…
Advocate of direct manipulation

92 Intelligent Agents

93 Outline
Agents and environments
Rationality
PEAS (Performance measure, Environment, Actuators, Sensors)
Environment types
Agent types

94 Agents
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators
Human agent: eyes, ears, and other organs for sensors; hands, legs, mouth, and other body parts for actuators
Robotic agent: cameras and infrared range finders for sensors; various motors for actuators

95 Agents and environments
The agent function maps from percept histories to actions: [f: P* → A]
The agent program runs on the physical architecture to produce f
agent = architecture + program
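A minimal rendering of this abstraction in Java (the names here are illustrative, not from the slides): the agent function maps the percept history seen so far to an action, and an agent program is any concrete implementation of it on a given architecture.

import java.util.List;

// The agent function f: P* -> A, as an interface.
interface Agent<P, A> {
    A act(List<P> perceptHistory);
}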

96 Vacuum-cleaner world Percepts: location and contents, e.g., [A, Dirty]
Actions: Left, Right, Suck, NoOp

97 A vacuum-cleaner agent
What is the right function? Can it be implemented in a small agent program?

98 Rational agents
An agent should strive to "do the right thing", based on what it can perceive and the actions it can perform. The right action is the one that will cause the agent to be most successful
Performance measure: an objective criterion for success of an agent's behavior
E.g., the performance measure of a vacuum-cleaner agent could be the amount of dirt cleaned up, the amount of time taken, the amount of electricity consumed, the amount of noise generated, etc.

99 Rationality
A fixed performance measure evaluates the environment sequence:
one point per square cleaned up in time T?
one point per clean square per time step, minus one per move? (sketched in code below)
penalize for > k dirty squares?
A rational agent chooses whichever action maximizes the expected value of the performance measure given the percept sequence to date
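As a toy rendering of the second measure above (one point per clean square per time step, minus one per move; the encoding of the environment sequence is my own):

// states[t][s] = true if square s is clean at time step t;
// moved[t] = true if the agent moved at time step t.
static int score(boolean[][] states, boolean[] moved) {
    int points = 0;
    for (int t = 0; t < states.length; t++) {
        for (boolean clean : states[t]) {
            if (clean) points++;          // +1 per clean square per time step
        }
        if (moved[t]) points--;           // -1 per move
    }
    return points;
}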

100 Rationality
Rational does not mean omniscient
percepts may not supply all the relevant information
Rational does not mean clairvoyant
action outcomes may not be as expected
Hence, rational does not necessarily mean successful
Agents can perform actions in order to modify future percepts so as to obtain useful information (information gathering, exploration)
An agent is autonomous if its behavior is determined by its own experience (with the ability to learn and adapt)

101 PEAS
PEAS: Performance measure, Environment, Actuators, Sensors
Must first specify the setting for intelligent agent design
Consider, e.g., the task of designing an automated taxi driver:
Performance measure
Environment
Actuators
Sensors

102 Agent functions and programs
An agent is completely specified by the agent function mapping percept sequences to actions
One agent function (or a small equivalence class) is rational
Aim: find a way to implement the rational agent function concisely

103 Table-lookup agent
Drawbacks:
Huge table
Takes a long time to build the table
No autonomy
Even with learning, it would take a long time to learn the table entries
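For concreteness, here is a sketch of the table-driven idea in Java, in the spirit of AIMA's TABLE-DRIVEN-AGENT pseudocode (the rendering is mine): the entire behavior is a single lookup keyed by the full percept sequence, which is exactly why the table becomes huge.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

class TableDrivenAgent {
    private final List<String> percepts = new ArrayList<>();
    // One action per possible percept sequence: the table's size grows
    // exponentially with the length of the history.
    private final Map<List<String>, String> table;

    TableDrivenAgent(Map<List<String>, String> table) { this.table = table; }

    String act(String percept) {
        percepts.add(percept);
        return table.get(percepts);  // look up the whole history
    }
}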

104 Agent program for a vacuum-cleaner agent
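The program shown on this slide did not survive in the transcript; the classic AIMA reflex vacuum agent it presumably depicted is, roughly (Java rendering mine):

// A reflex vacuum agent for the two-square world:
// suck if the current square is dirty, otherwise move to the other square.
class ReflexVacuumAgent {
    static String act(String location, boolean dirty) {
        if (dirty) return "Suck";
        if (location.equals("A")) return "Right";
        return "Left";
    }
}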

105 Agent types Four basic types in order of increasing generality:
Simple reflex agents
Reflex agents with state
Goal-based agents
Utility-based agents
All of these can be turned into learning agents

106 Simple reflex agents

107 Simple reflex agents
function Simple-Reflex-Agent(percept) returns an action
    static: rules, a set of condition-action rules
    state := Interpret-Input(percept)
    rule := Rule-Match(state, rules)
    action := Rule-Action[rule]
    return action

108 Simple reflex agents

109 Model-based reflex agents

110 Model-based reflex agents
function Reflex-Agent-With-State(percept) returns an action
    static: state, a description of the current world state
            rules, a set of condition-action rules
            action, the most recent action, initially none
    state := Update-State(state, action, percept)
    rule := Rule-Match(state, rules)
    action := Rule-Action[rule]
    return action

111 Example

112 Model-based reflex agents

113 Goal-based agents

114 Utility-based agents

115 Learning agents

116 Summary
Agents interact with environments through actuators and sensors
The agent function describes what the agent does in all circumstances
The performance measure evaluates the environment sequence
A perfectly rational agent maximizes expected performance

117 Summary
Agent programs implement (some) agent functions
PEAS descriptions define task environments
Environments are categorized along several dimensions: observable? deterministic? episodic? static? discrete? single-agent?
Several basic agent architectures exist: reflex, reflex with state, goal-based, utility-based; all agents can improve their performance through learning

