Commentary on Searle
Presented by Tim Hamilton

Robert P. Abelson, Dept. of Psychology, Yale
The act of writing the rules for symbol manipulation is itself a feat worth praising. Our own learning ability comes from processing rules (addition, money, etc.), and it is assumed that as more rules are learned, our understanding increases. On Searle’s argument, someone does not really learn something without actually doing it personally.
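As a minimal illustration of what “manipulating symbols according to rules” amounts to (this sketch is not from the presentation; the rulebook entries and the names RULEBOOK and chinese_room_operator are invented for illustration), the operator below simply matches input symbols against a rulebook and emits whatever output the rules prescribe, with no grasp of what either string means:

    # Minimal sketch of a Chinese Room-style rulebook (illustrative only).
    # The operator matches input symbols against rules and returns the
    # prescribed output symbols without understanding any of them.
    RULEBOOK = {
        "你好吗": "我很好，谢谢",    # placeholder rule: "How are you?" -> "I'm fine, thanks"
        "这是什么": "这是一个房间",  # placeholder rule: "What is this?" -> "This is a room"
    }

    def chinese_room_operator(input_symbols: str) -> str:
        """Return whatever string the rulebook prescribes for the input symbols."""
        return RULEBOOK.get(input_symbols, "对不起")  # fallback symbols ("sorry")

    if __name__ == "__main__":
        print(chinese_room_operator("你好吗"))  # fluent-looking output, no understanding

On Searle’s view, nothing in this loop understands Chinese; on the line Abelson takes, it is at least unclear why accumulating more and more such rules could never add up to understanding.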

Abelson, cont.
It is very common for humans to produce linguistic interchange in areas where we have no idea what we are talking about. Should we give a computer some credit when it performs as well as we do? Programs lacking sensorimotor input may miss things, but why is intentionality so important?

Abelson, cont.
Abelson says, “Intentionality for knowledge is the appreciation of the conditions for its falsification.” Psychologists cannot even answer the question of how we determine what to do when beliefs and facts do not agree; how can we expect a computer to do the same? Conclusion: AI is too young a discipline for objections based on a lack of intentionality to be convincing.

Ned Block, Dept. of Linguistics and Philosophy, MIT
Searle’s arguments depend on intuition, and in the presence of enough evidence intuition must be ignored. Examples: the earth moves through space at over 100,000 kph; a grapefruit-sized chunk of grey organic matter is the seat of mentality. Searle’s arguments against AI are really arguments against the view of cognition as formal symbol manipulation. Before we can reject this view, we must be presented with the evidence for it, in order to decide whether our intuition is valid.

Block, cont.
A machine that manipulates descriptions of understanding does not itself understand. This still does not harm the theory of understanding as symbol manipulation. Cognitive psychology tries to decompose all mental functions into symbol-manipulation processes, continuing until the remaining “primitive” functions are simply “a matter of hardware.”

Block, cont.
Instead of one man trapped in a room manipulating Chinese symbols, what if there were an army, each member performing one single primitive operation and able to communicate with the others: the “cognitive homunculi head”? Is this network thinking? The molecules in our bodies are slowly exchanged with our environment over time; what if we lived in a place where molecules were really tiny vehicles inhabited by beings smaller than subatomic particles? Would this affect our ability to think and understand? Would we now lack intentionality?

Block, cont.
Intuitions about mentality are influenced by what we believe, so Searle needs to show that his intuition that the cognitive homunculi head lacks intentionality does not stem from a prior belief that symbol manipulation cannot be cognition. The source of an intuition is an important component of a proper argument.

Daniel Dennett, Center for Advanced Study in the Behavioral Sciences, Stanford
AI of the time is “bedridden,” its only mode of perception and action being linguistic. The AI community is aware of the shortcomings of this model. Searle’s rebuttal to the “systems reply” is that a person (a whole system) understands language while the portion of their brain which processes language does not.

Dennett, cont.
Searle’s example person who internalizes the entire symbol-manipulation system would eventually learn and understand Chinese, simply by noticing what their own actions are in response to different Chinese inputs. Searle’s insistence on the presence of intentionality raises two questions: What does the brain produce? What is the brain for?

Dennett, cont.
Searle says the brain produces intentionality, while AI and others would say that the brain produces control. Searle admits that a machine could produce control without intentionality, so what then is the use of intentionality if our exact actions can be produced without it?

Roger C. Schank, Dept. of Computer Science, Yale
Agrees with Searle that the programs he has written do not think. Disagrees that computer programs will never understand and will never explain human abilities. Some theories employed by AI (e.g., scripts), later tested on human subjects, have been shown to be accurate descriptions of human abilities.

Schank, cont.
Complex theories of understanding must be explained by computer programs, not in English. Can a model of understanding tell us anything about understanding itself? This is relevant to both AI and psychology. Schank argues that it is just as impossible for biology to explain what starts life as it is for psychology to explain what causes understanding.

Schank, cont.
A model (a robot) could be built that functions exactly as if it were alive. Is it alive? Do programs that function as if they understand actually understand? Schank himself says no, but then asks, “Does the brain understand?” Humans themselves understand, but do the biochemical processes within their grey matter understand?

Schank, cont.
Schank agrees that something that simply uses rules does not understand. The hardware implementation, whether biological or mechanical, does not understand. It is the person who writes the rules, the AI researcher, who understands. This suggests that perhaps some sort of passing-on of understanding is taking place.