First, Scale Up to the Robotic Turing Test, Then Worry About Feeling.


Consciousness is feeling, and the problem of consciousness is the problem of explaining how and why some of the functions underlying some of our performance capacities are felt rather than just functed. But unless we are prepared to assign feeling a telekinetic power (which all evidence contradicts), feeling cannot be assigned any causal power at all. We cannot explain how or why we feel. Hence the empirical target of cognitive science can only be to scale up to the robotic Turing Test.

Consciousness is Feeling; Performance; Biochemical Robots; The Other-Minds Problem; The Causal Role of Feeling; The Mind/Matter Problem; Forward & Reverse Engineering; Vitalism; Correlation and Causation; Feeling Versus Functing: How and Why Do We Feel?; Telekinesis

Are models of consciousness useful for AI? Are AI systems useful for understanding consciousness? What are the theoretical foundations of machine consciousness? Is machine phenomenology possible? Will conscious systems perform better than unconscious systems? What are the implementation issues of current AI systems inspired by consciousness?

Can a machine be conscious? (How?) It depends what we mean by "machine": man-made devices? toasters? ovens? cars? computers? today's robots? "Almost certainly not."

Why "almost"? (1) Empirical risk. (2) The other-minds problem. Descartes on certainty and uncertainty: Certain: (i) mathematics, (ii) my own mind (the "cogito"). Uncertain: everything else. The other-minds problem haunts robotics.

"Is any machine we have built to date conscious?" "Can a man-made artifact be conscious?" Genetic or other biological engineering? Toasters: man-made vs. tree-grown

Machine = any causal physical system. This includes biological organisms; hence we are conscious machines. So the right question is: What kinds of machines can and cannot be conscious, and how? (cognitive science) (And how can we tell? The "other-minds problem".)

"forward-engineering" vs. "reverse-engineering" causal explanation How?

Explaining a cardiac system (a heart): F-eng: build a mechanism that can do what the heart can do. R-eng: F-eng plus explain (and/or build) the structure and the function of the biological heart itself: what the heart is (made out of) and how it in particular happens to do what hearts can do. Explaining a conscious system (the brain): F-eng: build a mechanism that can do what the brain can do. R-eng: F-eng plus explain (and/or build) the structure and the function of the biological brain itself: what the brain is (made out of) and how it in particular happens to do what brains can do. Either way, the causal explanation is a structural/functional one.
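
To make the F-eng/R-eng contrast concrete, here is a minimal Python sketch (not from the talk; the "PumpSpec" interface and the class names are hypothetical, purely illustrative): both systems meet the same functional spec, but only the reverse-engineered one also reproduces, and so explains, the biological structure that happens to implement it.

```python
# A minimal sketch of the F-eng vs. R-eng distinction; all names are illustrative.
from abc import ABC, abstractmethod


class PumpSpec(ABC):
    """The functional spec: anything that can do what a heart can do."""

    @abstractmethod
    def pump(self, volume_ml: float) -> float:
        """Move a volume of fluid; return the volume actually moved."""


class ForwardEngineeredPump(PumpSpec):
    """F-eng: any mechanism that meets the spec, by whatever means (say, a rotary impeller)."""

    def pump(self, volume_ml: float) -> float:
        return volume_ml  # only the performance capacity matters here


class ReverseEngineeredPump(PumpSpec):
    """R-eng: meets the same spec, but also reproduces (and so explains) the
    biological structure (four chambers) through which real hearts do it."""

    def __init__(self) -> None:
        self.chambers = ["right atrium", "right ventricle",
                         "left atrium", "left ventricle"]

    def pump(self, volume_ml: float) -> float:
        per_chamber = volume_ml / len(self.chambers)
        return per_chamber * len(self.chambers)  # same function, via the same parts


# Either way, the causal explanation is structural/functional:
for heart in (ForwardEngineeredPump(), ReverseEngineeredPump()):
    assert heart.pump(70.0) == 70.0
```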

The ghost of the other-minds problem: Note that the cardiac research program is completely unproblematic. The cardiac vitalist asks: "Can a machine be cardiac?" (What kind of machine can and cannot be cardiac, and how?)

At no point would the cardiac vitalist have any basis for saying: "But how do we know that this machine is really cardiac?" There is no way left (other than ordinary empirical risk) for any difference even to be defined. The same would be true if our question had been "Can a machine be alive?" and we asked, of the man-made, R-eng clone: "But how do we know that this machine is really alive?"

Two structurally and functionally indistinguishable systems, one natural and the other man-made, their full causal mechanism known and understood: What does it mean to ask, "But what if one of them is really alive, whereas the other is not?" What property is at issue that one has and the other lacks, when all empirical properties have already been captured? Vitalism sounds like the other-minds problem, and may be the other-minds problem: the animism at the true heart (soul) of vitalism.

The (non-animist) vitalist who accepts that plants are not conscious would be in exactly the same untenable position if sceptical about the R-eng plant as the sceptic about the R-eng heart: what vital property is at issue (if it is not consciousness itself)? But the same is most definitely not true in the case of consciousness itself.

Forward-engineering the brain: Build an F-eng robot that passes the Turing Test: it can do everything a real human can do, for a lifetime, indistinguishably from a real human (except in appearance: we will return to that). Is it really conscious? It is indistinguishable from us in everything it can do. But being conscious is something I am, not something I do. In particular, it is something I feel; indeed, it is the fact that I feel.

The sceptic wants to say that the F-eng robot is the wrong kind of machine, that it lacks something essential that we humans have: the robot does not feel; it merely behaves, exactly and indeed Turing-indistinguishably, as if it feels, but without actually feeling a thing.

Whether or not a Turing robot feels is an ontic question, not merely an epistemic one. But having conceded that point about certainty, only a fool argues with the Turing-indistinguishable:

Indistinguishable? If you prick us, do we not bleed? Perhaps the sceptic about the F-eng machine should hold out for the R-eng machine, made out of the right stuff, Turing-indistinguishable both inside and out, at both the macro and micro levels. But would we be right to kick robots that don't bleed, because we infer that they therefore don't feel?

The Other-Minds Barrier. TELEPATHY: mind-reading vs. Turing-testing.

"theory of mind" or "mind-reading" in animals and children. (not philosophy or parapsychology but "other-mind perception) Mind-reading is Turing-testing: inferring mental states from behavior. (Language (also a behavior) is the most powerful and direct means of mind-reading!)

The only mind we can read other than by Turing-testing is our own! Don't make the ontic/epistemic conflation here: Turing-testing does not mean that all there is to mind is behavior (as the blinkered behaviorists thought)! It means that the only way to read others' minds is through their behavior.
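
A minimal sketch of this point (illustrative only; the hidden `feels` flag and the judge are hypothetical): the judge can issue any behavioral probe it likes, but no probe ever reaches the internal, ontic fact.

```python
# A minimal sketch of Turing-testing as our only access to other minds.
# The hidden `feels` flag stands in for the ontic fact; no behavioral probe reaches it.

class Agent:
    def __init__(self, feels: bool) -> None:
        self._feels = feels  # internal, ontic fact: invisible to any outside observer

    def respond(self, probe: str) -> str:
        # Behavior (including verbal behavior) is identical whether or not it feels.
        return f"Ouch! That {probe} hurt."


def turing_judge(a: Agent, b: Agent, probes: list[str]) -> str:
    """All the judge ever gets to compare is behavior."""
    for probe in probes:
        if a.respond(probe) != b.respond(probe):
            return "distinguishable"
    return "indistinguishable"


sentient, zombie = Agent(feels=True), Agent(feels=False)
print(turing_judge(sentient, zombie, ["pinprick", "insult", "tickle"]))
# Prints "indistinguishable": the epistemic test cannot settle the ontic question.
```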

Indistinguishable? What about R-eng and the rest of the neuromolecular facts about the brain? Which facts?

"What kinds of machines can and cannot be conscious?" We know brains can be, but how? What are their relevant properties (if their weight, for example, is not)? If we pare down the properties of the brain to test which ones are and are not needed to be conscious: What will our test be? We are right back to Turing-testing again, the only non-telepathic methodology available to us, because of the other-minds problem.

Will "correlations" do instead? Brain imaging to find the areas and activities that covary with conscious states, as the necessary and sufficient conditions of consciousness? But how did we identify those correlates? Because they were correlates of behavior: and of our own feelings i.e., by Turing-testing. What is our basis for favoring R-eng over F-eng then, if they are Turing-Indistinguishable (behaviorally) and the Turing Test is our only face-valid criterion.

Conclusion: "What kinds of machines can be conscious (and how)?" The kinds that can pass the Turing Test, and by whatever means are necessary and sufficient to pass the Turing Test.

Two consolations for residual worries about zombies passing the Turing Test: (1) Darwin is no more capable of telepathy than we are: evolution can't distinguish sentients from zombies either. (2) The problem of explaining how (and why) we are not zombies (the mind/body problem) is hard (probably insoluble).

Why is the Mind/Body Problem hard? Causality: the Scylla of TELEKINESIS (feeling as a primal force) and the Charybdis of EPIPHENOMENALISM (feeling as an inert fact).

It would be easy if telekinetic powers existed: then feelings would be physical forces like everything else. But there is no evidence that feelings are causal forces. Hence both F-eng and R-eng can only explain how it is that we can do things, not how it is that we can feel things. And that is why the ghost in the machine is destined to continue to haunt us even after all of cognitive science's empirical work is done. This paper (and linked references) is available at:
