Artificial Intelligence: Definition


1 Artificial Intelligence: Definition
Lecture Notes: Artificial Intelligence: Definition
Dae-Won Kim, School of Computer Science & Engineering, Chung-Ang University

2 What are AI Systems?

3 Deep Blue defeated the world chess champion Garry Kasparov in 1997

4 During the 1991 Gulf War, US forces deployed an AI logistics planning and scheduling program that involved up to 50,000 vehicles, cargo, and people

5 Proverb solves crossword puzzles better than most humans

6 Sony’s AIBO and Honda’s ASIMO

7 Web Agents & Search engines: Google, Yahoo

8 Recognition Systems: Speech, Character, Face, Iris, Fingerprint

9 Virtual Reality and Computer Vision

11 Potted History of AI
1943  McCulloch & Pitts: Boolean circuit model of the brain
1950  Turing's "Computing Machinery and Intelligence"
1950s Early AI programs
1956  Dartmouth meeting: the name "Artificial Intelligence" adopted
1965  Robinson's complete algorithm for logical reasoning
1966  AI discovers computational complexity; neural network research almost disappears
1969  Early development of knowledge-based systems
1980  Expert systems industry booms
1988  Expert systems industry busts: "AI Winter"
1985  Neural networks return to popularity
1988  Resurgence of probability and soft computing
1995  Agents, agents, everywhere ... with data mining
2000  Bioinformatics powered by the Human Genome Project
2003  Human-level AI back on the agenda: a challenging goal

12 Some researchers define AI in terms of one of four concepts:

13 1. Systems that think like humans

14 2. Systems that think rationally

15 3. Systems that act like humans

16 4. Systems that act rationally

17 AI: Acting humanly

18 Turing (1950): “The Turing Test”

19 Can machines think?

20 Can machines behave intelligently?

22 The Turing test is the 'Imitation Game'

23 In 2014, something happened.
Turing predicted that by 2000, a machine might have a 30% chance of fooling a lay person for 5 minutes. In 2014, something happened.

24 Problem: Turing test is NOT …

25 The Turing test is NOT reproducible or amenable to mathematical analysis

26 AI: Thinking humanly

27 It requires scientific theories of the internal activities of the brain

28 What level of abstraction? “Knowledge” or “circuits”.

29 How to validate? It requires the following:

30 Requires: Cognitive Science
Predicting and testing behavior of human subjects (top-down)

31 Requires: Cognitive Neuroscience
Direct identification from neurological data (bottom-up)

32 Problem: Thinking humanly is NOT

33 Both approaches are now distinct from AI as practiced in computer science.
The available theories do not explain anything resembling human-level general intelligence.

34 AI: Thinking rationally

35 Laws of Thought: “What are correct arguments/thought processes?”
by Aristotle

36 Several Greek schools developed various forms of logic:

37 Logic: notation and rules of derivation of thoughts

38 Problem: Thinking rationally is NOT

39 Not all intelligent behavior is mediated by logical deliberation

40 AI: Acting rationally

41 Rational behavior: doing the RIGHT thing

42 The RIGHT thing: that which is expected to maximize goal achievement, given the available information

43 An agent is an entity that perceives and acts.

44 Agents include humans, robots, programs, systems, etc.

45 This course is about designing rational agents (software, programs, platforms).

46 Abstractly, an agent is a function from percept histories to actions
f : P* → A
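
As a rough illustration (a sketch only; the names make_agent and echo_agent are not from the slides), this abstraction can be written in Python: the agent remembers every percept seen so far and delegates the choice of action to f.

    # Sketch of "f : P* -> A": an agent maps the percept history to an action.
    def make_agent(f):
        percept_history = []              # P*: all percepts seen so far
        def agent(percept):
            percept_history.append(percept)
            return f(percept_history)     # f chooses an action from the history
        return agent

    echo_agent = make_agent(lambda history: 'act-on-' + history[-1])
    print(echo_agent('see-wall'))         # prints: act-on-see-wall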

47 The agent program runs on the physical architecture to produce f

48 For any given class of tasks and environments, we seek the agent with the best performance.

49 Problem: Acting rationally is NOT

50 Computational limitations make perfect rationality unachievable
e.g., NP-hard problems

51 Design the best program for the given machine resources

52 Which of the following can be done at present?
Play a decent game of table tennis
Drive safely along a curving mountain road
Drive safely along Telegraph Avenue
Buy a week's worth of groceries on the web
Discover and prove a new mathematical theorem
Design and execute a research program in biology
Write an intentionally funny story
Give legal advice in a specialized area of law
Translate spoken English into Swedish in real time
Perform a complex surgical operation
Converse successfully with another person for an hour

53 Artificial Intelligence: Intelligent Agents
Lecture Notes: Artificial Intelligence: Intelligent Agents
Dae-Won Kim, School of Computer Science & Engineering, Chung-Ang University

54 The agent function maps from percept histories to actions:
f : P* → A

55 A Vacuum-cleaner Agent

56 Perception: ? Actions: ?

57 Perception: location and contents, e.g., [A, Dirty].
Actions: Left, Right, Suck, NoOp
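
As a concrete sketch using exactly this percept and action vocabulary (assuming a two-square world with locations A and B; this is one plausible agent function, not necessarily the "right" one asked about below):

    # Sketch of a reflex agent function for the two-square vacuum world.
    def vacuum_agent(percept):
        location, status = percept        # e.g., ('A', 'Dirty')
        if status == 'Dirty':
            return 'Suck'
        elif location == 'A':
            return 'Right'
        else:
            return 'Left'

    print(vacuum_agent(('A', 'Dirty')))   # prints: Suck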

59 Problem: A Vacuum-cleaner Agent

60 What is the right function?

61 Let’s talk about Rationality

62 A rational agent chooses whichever action maximizes the expected value of the performance measure given the percept sequence to date
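
Read as pseudocode, this is a maximization over actions; in the sketch below, actions and expected_value are assumed inputs supplied by the designer, not defined in the slides.

    # Sketch of rational action selection: choose the action with the highest
    # expected performance-measure value, given the percepts so far.
    def rational_choice(percept_sequence, actions, expected_value):
        return max(actions, key=lambda a: expected_value(a, percept_sequence))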

63 What is the performance measure?

64 1 point per square cleaned up in time T?

65 Minus 1 point per move?

66 Penalize for > k dirty squares?
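
For instance, the three candidates above could be combined into a single measure; the sketch below is illustrative, and the weights and the threshold k are arbitrary choices rather than values fixed by the slides.

    # One candidate performance measure for the vacuum world (illustrative).
    def performance(squares_cleaned, moves, dirty_squares_left, k=1):
        score = squares_cleaned                      # +1 per square cleaned in time T
        score -= moves                               # -1 per move
        if dirty_squares_left > k:
            score -= 10 * (dirty_squares_left - k)   # penalty for > k dirty squares
        return score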

67 Therefore, we can say

68 Rational ≠ omniscient

69 Perception may not supply all information

70 Rational ≠ clairvoyant

71 Action outcomes may not be as expected

72 Hence, rational ≠ perfect

73 To design a rational agent, we must specify the task environment (PEAS)

74 Performance measure
Environment
Actuators
Sensors

75 Consider the task of designing the Google driverless car

76 P: safety, comfort, profits, legality
E: streets, freeways, traffic, weather
A: steering, accelerator, brake
S: velocity, GPS, engine sensors

77 Consider the task of designing an automated internet shopping agent:
e.g., Recommender system

78 P: price, quality, efficiency
E: WWW sites, vendors
A: display to user, follow URL
S: HTML, XML pages
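
A PEAS description is simply a four-part record. A minimal sketch (the PEAS class name is illustrative, not a standard API), instantiated with the shopping-agent values above:

    from dataclasses import dataclass

    # PEAS as a simple record type (illustrative sketch).
    @dataclass
    class PEAS:
        performance: list
        environment: list
        actuators: list
        sensors: list

    shopping_agent = PEAS(
        performance=['price', 'quality', 'efficiency'],
        environment=['WWW sites', 'vendors'],
        actuators=['display to user', 'follow URL'],
        sensors=['HTML pages', 'XML pages'],
    )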

79 Agent Types: four basic types in order of increasing generality

80 Simple reflex agents
Reflex agents with state
Goal-based agents
Utility-based agents

81 Simple Reflex Agents
1. If a student is sleeping, then assign a penalty.
2. How does this apply to vehicle driving?
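
As a sketch, a simple reflex agent is just a set of condition-action rules over the current percept, with no memory of earlier percepts (the names are illustrative; the rule mirrors the classroom example above):

    # Sketch of a simple reflex agent: it reacts to the current percept only.
    def reflex_teacher(percept):
        if percept == 'student sleeping':
            return 'assign penalty'
        return 'continue lecture'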

82 Reflex Agents with State
1. Also check the student's academic history and habits.
2. How does this apply to vehicle driving?

83 Goal-based Agents
1. Consider goals: be a good professor in the AI class.
2. How does this apply to vehicle driving?

84 Utility-based Agents
1. Utility: a performance measure.
2. How good a grade I will assign / how good a professor I could be.

85 Learning Agents
Eventually, all agents will become learning agents.

