Bloom County on Strong AI

THE CHINESE ROOM
- Searle’s target: “Strong AI”, the claim that an appropriately programmed computer is a mind, capable of understanding and other propositional attitudes.
- On this view, Schank’s “story understander” program really understands the stories.

The Gedankenexperiment
- Searle, who knows no Chinese, is locked in a room with an enormous batch of Chinese script.

The Gedankenexperiment
- Slips of paper with still more Chinese script come through a slot in the wall.

The Gedankenexperiment
- Searle has been given a set of rules in English for correlating the Chinese script coming through with the batches of script already in the room:
  If ‘##$^^’, then ‘^%&*’
  If ‘#&^*$#’, then …

The Gedankenexperiment
- Searle is instructed to push back through the slot whatever Chinese script the incoming scripts are correlated with according to the rules.

The Gedankenexperiment
- But Searle, remember, knows no Chinese; he identifies the scripts coming in and going out on the basis of their shapes alone. And following the rules requires only this ability.
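The point that rule-following demands nothing beyond shape recognition can be made concrete with a toy sketch. This is my own illustration, not anything in Searle's paper: the rule book is modelled as a lookup table keyed on the literal strings, and "following the rules" is nothing but lookup.

```python
# A minimal sketch (illustrative, not Searle's own formulation) of the rule
# book as pure shape matching: each rule pairs an incoming string of symbols
# with an outgoing one.
RULE_BOOK = {
    "##$^^": "^%&*",  # the rule quoted on the slide
    # ... further rules, whatever the program writers supply
}

def follow_rules(incoming_script: str) -> str:
    """Return the script correlated with the incoming one.

    Every step operates on the symbols' shapes alone; at no point does
    the procedure consult what any symbol means.
    """
    return RULE_BOOK[incoming_script]

print(follow_rules("##$^^"))  # prints ^%&*
```

Nothing in the lookup changes if the strings happen to be meaningful Chinese questions and answers, which is exactly the intuition the thought experiment trades on.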

The Gedankenexperiment
- Suppose that those outside the room call the scripts going in ‘the questions’, the scripts coming out ‘the answers’, and the rules that Searle follows ‘the program’.

The Gedankenexperiment
- Suppose also that the program writers get so good at writing the program, and Searle so good at following it, that Searle’s answers are indistinguishable from those of a native Chinese speaker.

The result
- It seems clear that Searle nevertheless does not understand the questions or the answers; he is as ignorant of Chinese as ever.

The result
- But Searle is behaving just as a computer does, “performing computational operations on formally specified elements”.

The result
- Hence, manipulating formal symbols, which is just what a computer running a program does, is not sufficient for understanding or thinking.

On the way to the replies
- Objection: ‘understanding’ comes in degrees, and it’s vague.
- Searle: But I’m a clear case of understanding English sentences, and a clear case of failing to understand Chinese ones.

The systems reply
- Of course the guy in the room doesn’t understand. But the guy is just part of a larger system, which does understand.

Systems reply: Searle’s 1st rejoinder
- Let the guy (Searle) internalize all the elements of the system.
- But the guy still doesn’t understand, and neither does the system, because there is nothing in the system that is not in him; the system is just a part of him.

Systems reply: a response to the 1st rejoinder
- Searle’s rejoinder seems to rely on the following invalid principle: if I’m not F, and x is a part of me, then x is not F either.
- Counterexample: I am not less than 5 lbs in weight, but my heart, which is part of me, might be.

Systems reply: Searle’s 2nd rejoinder
- The idea is that, although the guy doesn’t understand Chinese, somehow the conjunction of the guy plus the rulebook, the batches of script, the slot, and the room does understand. That’s ridiculous.

Systems reply: Searle’s 2nd rejoinder
- Consider knowing what ‘Do you want a hamburger?’ means.
- One necessary condition: you have to know that ‘hamburger’ refers to hamburgers.
- How on earth does the Chinese Room system come to know facts like these?

Systems reply: Searle’s 3rd rejoinder
- If the system counts as cognitive simply because it has certain inputs and outputs and a program in between, then all sorts of non-cognitive systems are going to count as cognitive.

Systems reply: Searle’s 3rd rejoinder
- Reply: isn’t that just an interesting result of the strong AI model?
- Searle: then the model can’t explain what makes the mental mental.

The robot reply
- Put a computer in the head of a robot, giving the computer “perceptual” and motor capacities; this will bestow understanding and intentionality.

Robot reply: Searle’s rejoinder
- Note that the reply concedes that manipulating formal symbols does not add up to understanding.
- Put the Chinese Room in the head of a robot. Still no understanding.

Robot reply: Fodor’s defense
- “Given that there are the right kinds of causal linkages between the symbols that the device manipulates and things in the world, it is quite unclear that intuition rejects ascribing propositional attitudes to it.”

Robot reply: Fodor’s defense
- What Searle has shown (though we already knew it) is that some kinds of causal linkages are not the right kinds.
- It is a fallacy to infer from this that no kinds of causal linkages could bestow understanding.

Robot reply: Fodor’s defense
- The analogy with perception: ‘A perceives x’ is true only if there is some sort of causal relation between x and A.
- The demonstration that some causal relations are not the required sort is irrelevant.

Robot reply: Searle’s response to Fodor
- The symbols can be connected to the world any way you please, short of intentionality being essentially involved in the connection; still no understanding or propositional attitudes.

Brain simulator reply
- Design a program that simulates the actual sequence of neuron firings that occurs in a native Chinese speaker when he or she is understanding a story. Wouldn’t this do it?

Brain simulator reply: Searle’s rejoinders
- The reply concedes something important: that we might need to know how the brain works in order to know how the mind works. But this seems like an abandonment of Strong AI.
- Chinese plumbing in the Chinese room (water pipes and valves simulating the neuron firings): still no understanding.

Combination reply
- A simulated brain in the head of a robot, the whole comprising a thinking system.

Combination reply: Searle’s rejoinder
- We would find it “irresistible” to attribute intentionality to such an entity. But that has entirely to do with its behavior. Once we learned how the entity functioned, we’d withdraw our attributions.

Other minds reply
- If we refuse to attribute intentionality to the robot, won’t we have to refuse to attribute intentionality to other people?
- Rejoinder: this confuses an epistemological problem with a metaphysical one.

Many mansions reply
- Searle’s argument is helped along by the fact that present technology is limited. Surely some day we’ll be able to artificially reproduce intelligence and cognition.

Many mansions reply: Searle’s rejoinder
- This trivializes the project of Strong AI.