CIS 530 / 730: Artificial Intelligence
Lecture 41 of 42, Wednesday, 10 December 2008
William H. Hsu, Department of Computing and Information Sciences, KSU
KSOL course page: Course web site: Instructor home page:
Reading for Next Class: Chapters 1-14, 22-23, 26, Russell & Norvig 2nd edition
Philosophy of Mind
Discussion: Final Exam Review

PHILOSOPHY OF MIND © 2006 Hilary Greaves

The Two Central Problems in the Philosophy of Mind
The Problem of Other Minds: How can I know that other minds exist? (The observable person vs. their thoughts, feelings, sensory experiences, etc.)

The Problem of Other Minds (cont'd): How can I know what is going on in other minds? (E.g., is the other person afraid, or looking forward to next summer's holiday?)

The Mind-Body Problem: How are minds and their contents related to the physical, chemical & biological world?

Two (apparently unique) aspects of mental phenomena
Consciousness: Your mind is conscious. But as far as we know, ordinary bits of physical matter are not conscious. "What is consciousness?" You know!

Intentionality (or aboutness): Mental states can be about other things in the world. Your thought that Bush is a jerk is about George Bush.

Intentionality/aboutness: Ordinary physical objects aren't about anything. The desk is not about the chair.

Intentionality/aboutness: (A painting can be about something...

Intentionality/aboutness: ...but that's only because it was painted by someone who was thinking about the thing he was painting.)

Terminological note: 'Intentionality' has no special connection to intentions. Mental states that have 'aboutness' (= 'intentionality'):
- Bush's knowledge that Saddam Hussein is alive is about Saddam.
- Tom's fear of the dentist is about dentists.
- My desire that Australia will be fun is about Australia.
- Your intention to eat dinner this evening is about your dinner.

Intentionality/aboutness: The challenge: an adequate explanation of what minds are should explain how mental goings-on can be about things, since ordinary physical goings-on do not have this feature.

Why explaining 'aboutness' is difficult (thoughts about things that exist):
- There's no such thing as 'intentional string'.
- The asymmetry problem: aboutness can't be resemblance.

An attempt to explain what 'aboutness' is: My thought is about Bush = my thought was 'caused in the right sort of way' by Bush (a causal chain running from Bush to the thought).

Why the causal account doesn't work: We can also think about things that don't exist. ("It would be fun to ride a unicorn." Causal chain??)

Cartesian Dualism: Minds and matter are two fundamentally different kinds of "substances": mental stuff (minds: thoughts, feelings, sensory experiences, consciousness, intentionality) and physical stuff (matter).

Problems with Cartesian Dualism (I): It's mysterious what the "mind substance" is. What it's not: not made of matter, not located in space, not 'extended', ...

Problems with Cartesian Dualism (II): It's a mystery how minds can interact causally with things made of matter. Mental causing physical: the thought "I'm going to raise my arm" (mental substance) causes the arm (matter) to move.

Problems with Cartesian Dualism (II), cont'd: Physical causing mental: sound waves cause a sound experience.

Problems with Cartesian Dualism (III): Are dualism and gradual evolution consistent?

Problems with Cartesian Dualism (IV): It makes the Other Minds Problem very hard to solve.

"Materialist" solutions to the Mind-Body Problem
Materialism: the claim that everything in the universe is made up of matter.

The advance of materialism: The advance of science has made materialism look very plausible to many philosophers & scientists.
- Astronomy: the heavenly bodies are made of matter and obey the laws of physics.

The advance of materialism (cont'd):
- Evolutionary biology: no non-material processes or forces (e.g. God) are needed to explain the design in the biological world.

The advance of materialism (cont'd):
- Advances in physiology and in understanding the genetic code: there is no need for a "life force" ("élan vital") to distinguish a live tiger from a dead one.

The advance of materialism (cont'd):
- Advances in understanding how the brain works, helped by imaging technology, make it increasingly plausible that mental phenomena like thought and consciousness might be explained in terms of brain processes.

The "Reductive Materialism" hypothesis:
- the social sciences can be reduced to (i.e. explained by appeal to) psychology
- psychology can be reduced to biology
- biology can be reduced to chemistry
- chemistry can be reduced to physics

Behaviorism: The way we tell that (e.g.) you're in pain is by observing your behavior. The inspiration for behaviorism: maybe what it means to say you're in pain also involves your behavior.

'Behaviorism' in psychology: Methodological (or psychological) behaviorists included B. F. Skinner. Methodological behaviorism: if there is anything more to mental states than dispositions to behave in certain ways, that 'extra bit' has nothing to do with science.

'Behaviorism' in philosophy: Analytical (or philosophical) behaviorists included Ludwig Wittgenstein and Gilbert Ryle: there is nothing more; the mental state (e.g. feeling angry) just EQUALS the pattern of behavior.

Analytical behaviorism: Claims about mental states and processes can be "translated" into claims about patterns of behavior. Example: "Tom has a toothache" = "Tom moans; Tom says his tooth hurts; if someone touches Tom's tooth, Tom screams; etc."
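To make the "translation" claim concrete for a CS audience, here is a toy sketch (not from the lecture; the mapping and function name are hypothetical) that represents a mental-state claim as nothing over and above a list of behavioral-disposition claims:

```python
# A toy sketch of analytical behaviorism's "translation" idea: a claim about
# a mental state is mapped to claims about behavioral dispositions.
TRANSLATIONS = {
    "Tom has a toothache": [
        "Tom moans",
        "Tom says his tooth hurts",
        "if someone touches Tom's tooth, Tom screams",
    ],
}

def translate(mental_claim: str) -> list[str]:
    # Analytical behaviorism: the left-hand claim is said to mean nothing
    # over and above the behavioral claims on the right.
    return TRANSLATIONS.get(mental_claim, [])

print(translate("Tom has a toothache"))
```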

Analytical behaviorism (cont'd): "Jenny is hungry" = ?? "Jason wishes he could quit school" = ??

Behaviorism and the Problem of Other Minds: "I am feeling angry" EQUALS the corresponding pattern of behavior.

Problems for behaviorism (I): Undefinable mental states. Some mental states don't seem to be definable in this way: listening to Bob Dylan; thinking about how big the universe is.

Problems for behaviorism (II): Circular definitions. Behaviorist "definitions" of mental state terms all turned out to be circular (or just plain wrong). 'Tom believes it will rain today' = 'Tom will either stay at home or drive to school today' IF Tom wants to stay dry, and Tom doesn't believe there's a shelter at the bus stop, and...

Problems for behaviorism (II), cont'd: "James believes the exam will be hard" = ...??

Problems for behaviorism (III): The 'inverted spectrum' problem. It seems possible for one person to have their spectrum of color experiences 'inverted' relative to another's (both still say "The flower looks yellow"). But according to behaviorism, this is not possible.

Problems for behaviorism (IV): Some wrong predictions. For behaviorists, two people who behaved in just the same way would have the same mental states. But there are cases in which this is clearly crazy. Dennett's thought experiment: curare plus "amnestic". Case 1: administer a general anaesthetic, operate, then ask "How did it feel?"

Problems for behaviorism (IV), cont'd: "Curare" paralyses all voluntary muscles. "Amnestic" has no effect until 2 hours after ingestion, whereupon it wipes out memory of those two hours. Case 2: administer curare + amnestic, operate, then ask "How did it feel?"

The problem (an argument against behaviorism):
(P1) Behaviorism predicts that the patient who is given the general anaesthetic has the same experiences as the patient who is given curare + amnestic.
(P2) But that's wrong! One is unconscious, and the other is in excruciating pain.
Therefore, (C) behaviorism is false.

The (Type-Type) Identity Theory. The type-token distinction: Greaves's belief that snow is white and your belief that snow is white are two tokens of the same type of mental state.

The type-type identity theory: mental state types are identical with brain state types.
Examples of mental state types: the belief that snow is white; a burning pain in the index finger; the thought that 17 is a prime number.
Example of a type-type identification: "Pain is c-fibers firing" (pain and c-fiber firing are the same event type).

How the theory deals with the Other Minds Problem...??

Problem for the (Type) Identity Theory: "Chauvinism" about the mental. According to the type-type identity theory:
- Animals with brains significantly different from ours can't feel pain or have other mental states.
- If there are organisms in the universe whose chemical composition is different from ours, they can't feel pain or pleasure, and they can't think.
- Extra-terrestrials can't even think about math (e.g. that 7 x 5 = 35).

Functionalism (aka "machine functionalism"): The emergence of computers and the computer model of the mind. Computers are symbol manipulators; programs specify how the symbols are to be manipulated. (Example: an adding program takes the symbols 2 and 3 as input and outputs 5.)
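To make the "symbol manipulation" point concrete, here is a minimal sketch (not from the lecture; the names and rules are illustrative) of an adding program that operates purely on symbols, pairing input tokens with an output token by rule, without any grasp of what numbers are:

```python
# A minimal sketch: an "adding program" viewed purely as symbol manipulation.
# The rules pair input symbols with an output symbol; nothing here
# "understands" arithmetic.
ADD_RULES = {
    ("2", "3"): "5",
    ("3", "2"): "5",
    ("1", "1"): "2",
    # ... further rules could be listed or generated mechanically
}

def adding_program(a: str, b: str) -> str:
    """Manipulate symbols according to the stored rules."""
    return ADD_RULES[(a, b)]

print(adding_program("2", "3"))  # prints the symbol "5"
```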

Programs and physical devices: Many different sorts of physical devices can run the same program...

Minds and programs: Minds are to brains as programs are to computers. "The mind is the brain's program."
- Computer program: inputs (mouse clicks, keyboard strokes); outputs (screen displays, printouts).
- Mind: inputs (light rays, hammers hitting thumb); outputs (body movements, screams, sentences spoken).

'Functional states': Mental state concepts are functional concepts; mental states are functional states (not physical states). 'Functional states' for the adding program: on Input 2 the computer enters the state 'remember that a 2 has been input, and get ready to add a 2nd number'; on Input 3 it produces Output 5 and returns to the state 'ready to do an add calculation'.
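A minimal sketch (illustrative only; the class and state labels are hypothetical) of the adding program as functional states, where each state is characterized by what it does with input and which state it passes into, not by the physical stuff realizing it:

```python
# Each "functional state" is defined by how it responds to input and what
# state it transitions to, not by what the machine is made of.
class AddingMachine:
    def __init__(self):
        self.state = "ready to do an add calculation"
        self.first_input = None

    def receive(self, symbol):
        if self.state == "ready to do an add calculation":
            # "remember that a number has been input; await the 2nd number"
            self.first_input = int(symbol)
            self.state = "awaiting second input"
            return None
        # state == "awaiting second input": produce the output symbol
        result = str(self.first_input + int(symbol))
        self.state = "ready to do an add calculation"
        return result

machine = AddingMachine()
machine.receive("2")         # enters the 'a 2 has been input' state
print(machine.receive("3"))  # outputs "5"
```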

'Functional states' (cont'd): Input: seeing that it's raining produces the belief that it's raining; that belief, together with the desire to stay dry and the belief that there is no shelter at the bus stop, produces the output: look for the car keys. (Beliefs and desires as functional states.)
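The same idea applied to beliefs and desires, as a minimal sketch (illustrative; the state labels are hypothetical): mental states defined by their causal role linking inputs, other internal states, and behavioral outputs:

```python
# Beliefs and desires modeled as functional states: what matters is the
# input -> state -> output pattern, not the underlying hardware.
def respond(inputs, states):
    states = set(states)
    if "see that it's raining" in inputs:
        states.add("belief: it's raining")          # input produces a state
    if {"belief: it's raining",
        "desire: stay dry",
        "belief: no shelter at the bus stop"} <= states:
        return "look for car keys"                  # states produce an output
    return "no action"

print(respond({"see that it's raining"},
              {"desire: stay dry", "belief: no shelter at the bus stop"}))
```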

What is... the functionalist solution to the Other Minds Problem? The functionalist solution to the chauvinism problem?

Strong AI: A radical (?) implication of functionalism: "Strong AI", the claim that it is possible to build artificial minds with real mental states.
- A computer running the same program that your brain is running would have the same mental states that you have.
- It would be conscious, and thus feel pains and pleasures, have emotions, etc.
- It would have thoughts with real intentionality.

Can machines think? If they can, then the fact that functionalism predicts that they can counts in favor of the functionalist theory. If not, it counts as an objection to functionalism. The Turing test: if a machine passes the Turing test, we cannot tell that it isn't really thinking. It is a further step to say that if a machine passes the Turing test, then it is thinking; but perhaps (?) this extra step is very plausible. (Sample exchange: "What do you think about Saddam's trial?" "I don't normally approve of the death penalty, but this guy deserves everything he gets.")
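A minimal sketch (illustrative; the function names and the canned reply are hypothetical) of the Turing test protocol, in which the interrogator sees only text and so must judge from the conversation alone:

```python
# The interrogator never sees the respondent, only the text channel, so any
# verdict rests on the conversation rather than on what answered.
import random

def machine_respondent(question):
    # Hypothetical canned reply; a serious contestant would do far more.
    return "I don't normally approve of the death penalty, but..."

def human_respondent(question):
    return input(f"(human, please answer) {question}\n> ")

def run_turing_test(questions):
    hidden = random.choice([machine_respondent, human_respondent])
    for q in questions:
        print("Q:", q)
        print("A:", hidden(q))
    guess = input("Machine or human? ")
    return guess, hidden.__name__

# run_turing_test(["What do you think about Saddam's trial?"])
```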

Searle's Critique of Functionalism and Strong AI. Roger Schank's project: getting a computer to understand stories and answer questions about them the way people do. Story: "A man went into a restaurant and ordered a hamburger. When the hamburger arrived it was burnt to a crisp, and the man stormed out of the restaurant angrily, without paying for the burger or leaving a tip." Question: Did the man eat the hamburger? Answer: No.

What Schank's computer is doing: It's manipulating symbols. Does this mean that it understands what the symbols mean (i.e. understands the story, and understands its replies to the questions)?
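A toy sketch in the spirit of script-based story understanding (not Schank's actual system; the script and function are invented for illustration), showing how the "No" answer can fall out of purely symbolic matching against a stereotyped restaurant script:

```python
# Purely symbolic matching against a stereotyped restaurant script; no grasp
# of hamburgers, hunger, or anger is involved.
RESTAURANT_SCRIPT = ["enter", "order", "food arrives", "eat", "pay", "leave"]

def did_the_man_eat(story_events):
    # If the customer left without the "eat" step occurring, infer he did not eat.
    if "leave" in story_events and "eat" not in story_events:
        return "No"
    return "Yes" if "eat" in story_events else "Unknown"

story = ["enter", "order", "food arrives", "leave"]  # stormed out before eating
print(did_the_man_eat(story))  # "No"
```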

Searle's argument: Understanding is an 'intentional state': when you understand a story, you understand what it is about. Searle is going to argue that the computer could be manipulating symbols in all the right ways without understanding the story (= without having 'intentionality'). The "Chinese Room argument": an argument that passing the Turing test is not sufficient for thinking.

The setup of the "Chinese Room": Chinese symbols come in as input; the person inside, following a rule book, manipulates the symbols and passes other Chinese symbols back out as output.
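A minimal sketch (illustrative; the rules are invented) of the Chinese Room as mechanical rule-following: input symbols are paired with output symbols by lookup, and the rule-follower never needs to know what any symbol means:

```python
# Replies are produced by mechanically looking up rules pairing input symbols
# with output symbols; no understanding of the symbols is required.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # hypothetical example rule
    "你会说中文吗？": "当然会。",        # hypothetical example rule
}

def chinese_room(input_symbols):
    # Follow the rule book; fall back to a stock reply for unknown input.
    return RULE_BOOK.get(input_symbols, "请再说一遍。")

print(chinese_room("你好吗？"))  # output symbols, no understanding involved
```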

Searle's argument:
(P1) Neither Searle nor any other part of the Chinese Room really understands Chinese.
Therefore, (C1) the Chinese Room [i.e. the system] does not understand Chinese. (From (P1).)
(P2) But the Chinese Room perfectly simulates someone who does understand Chinese.
Therefore, (C2) simulating understanding is not sufficient for having understanding. (From (P2), (C1).)
Therefore, (C3) even if Schank's computer perfectly simulates human understanding of stories, it does not follow that Schank's computer really understands stories.

Searle's Account of Intentionality: It is a "causal product" of the right kind of biological system. (By contrast, the adding performed by the adding program is not a "causal product" of the symbol manipulation.)

Searle's Account of Intentionality (cont'd): Pick up a brick and throw it at a window; the window breaks. The breaking is a causal product of the brick being thrown.

Searle's Account of Intentionality (cont'd): Intentionality cannot be created simply by symbol manipulation. Searle makes the same claims for consciousness.

A Problem for Searle's View: Either intentionality and consciousness are restricted to brains like ours, in which case he is committed to chauvinism; or brains quite different from ours can also produce intentionality & consciousness, in which case the Other Minds Problem looks to be unsolvable, since we can't tell which brains just simulate consciousness & intentionality and which really have it.

Some questions for YOU to ponder: Can a suitably sophisticated computer which is NOT made out of "meat" like the human brain
- have real intentionality (and thus real thoughts)?
- have real consciousness (feel real pain & pleasure, and know what it is like to experience colors & tastes)?
Does it matter? If so, why?

Some questions for YOU to ponder (cont'd): Can an extra-terrestrial with a suitably sophisticated brain that is very different from our brain
- have real intentionality (and thus real thoughts)?
- have real consciousness (feel real pain & pleasure, and know what it is like to experience colors & tastes)?
How can we KNOW whether the computer or the extra-terrestrial has REAL consciousness and REAL intentionality?