
1 Counterfactual computational vehicles of consciousness Ron Chrisley, COGS/Dept. of Informatics, University of Sussex Toward a Science of Consciousness, Tucson, April 7th 2006

2 Outline Bishop's argument against computational explanations of consciousness My response: Acknowledge the counterfactual nature of physical states Segue: Use emphasis on counterfactual properties as motivation for a specific form of computationalism/representationalism Provides a plausible yet non-trivial enactivist model of perceptual experience: Imagination-Based Architecture

3 Bishop: "Dancing with Pixies" Poses a dilemma for computational explanations of consciousness Horns based on two notions of computation: 1. Non- or weakly-causal construal of computation 2. Strong, counterfactual causal construal of computation

4 Bishop's dilemma Either notion has problems: Horn 1: Weak causality: –Implies every computation is realised in every physical system –So any claim that a given computation is sufficient for consciousness implies panpsychism –Phenomenal "pixies" everywhere! Horn 2: Strong causality: –Violates naturalism by appealing to non-physical aspects of a state

5 Rejecting the first horn Yes, weakly causal construal of computation implies panpsychism But: –What's wrong with panpsychism anyway? Actually, a lot… –Better (cf Chalmers 94, 96; Chrisley 94): Weak construal not really a causal construal of computation at all Thus does not capture what is meant by computation

6 Embracing the second horn Strong, counterfactual causal construal of computation –Identity of a computational state depends not only on actual causal relations… –…but also on the causal effects (output, successor state) a state would have had, had different input been received Bishop: Subject to variants of Chalmers' Fading Qualia and Suddenly Disappearing Qualia arguments

7 Bishop's thought experiment Consider the operation of two robots: –R1: "controlled by a program replicating the fine-grained functional organisation of a system known to have phenomenal states" A particular run of R1 with input I results in an actual sequence of behaviours B –R2: any open physical system that generates B, given the same input I

8 More on R1 and R2 For example, R1 might be controlled by an AI program that enables it to output classifications of objects presented to its cameras –When given the input of a particular object, this results in a particular sequence of output classifications B: –"The colour of this triangle is a bit more red than the square I just saw" While R2 can just have a hard-wired circuit that happens to output B (regardless of input!), as in the sketch below
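A minimal sketch of the contrast (hypothetical names and contents, not Bishop's own formalism): R1 computes its output from its input, while R2 replays the fixed behaviour sequence B no matter what input it receives.

```python
def r1_classify(stimulus):
    """Input-sensitive: the classification depends on what is presented."""
    classifications = {
        "red_triangle": "a bit more red than the square I just saw",
        "blue_square": "a blue square",
    }
    return classifications.get(stimulus, "unrecognised object")

# The behaviour sequence B that R1 actually produced on one particular run:
B = [r1_classify("red_triangle")]

def r2_replay(stimulus):
    """Hard-wired: emits the recorded B regardless of the input."""
    return B[0]

# Identical external behaviour on the actual input, by construction:
assert r1_classify("red_triangle") == r2_replay("anything_at_all")
```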

9 Branching FSA We can conceive of R1 and R2 as finite-state automata with branching and non-branching states, respectively:

10 Non-branching FSA (Diagrams from Bishop 2002)
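A minimal sketch of the two automata as transition tables (my notation; the actual diagrams are in Bishop 2002): R1 branches on its input, while R2 has exactly one successor per state, so its input is causally irrelevant.

```python
# R1 and R2 as transition tables mapping (state, input) -> next state.
R1 = {  # branching FSA: different inputs select different successor states
    ("s0", "red_triangle"): "s1",
    ("s0", "blue_square"): "s2",
    ("s1", "red_triangle"): "s3",
    ("s1", "blue_square"): "s4",
}

R2 = {  # non-branching FSA: a single chain of states, whatever the input
    ("s0", "*"): "s1",
    ("s1", "*"): "s3",
}

def run(fsa, inputs, state="s0"):
    """Drive the automaton through a sequence of inputs ('*' = wildcard)."""
    for inp in inputs:
        state = fsa.get((state, inp), fsa.get((state, "*")))
    return state

# On the actually received inputs the two automata are indistinguishable...
actual = ["red_triangle", "red_triangle"]
assert run(R1, actual) == run(R2, actual) == "s3"

# ...but they differ counterfactually: had the input been different, R1
# would have ended up elsewhere, while R2 could not have.
assert run(R1, ["blue_square"]) != run(R2, ["blue_square"])
```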

11 The computationalist's view Bishop: "Hence, although the external behaviour of the two systems over the time interval is identical [viz, B], for [a computational theory of consciousness], only R1 would experience genuine phenomenal states." What's wrong with that?

12 Transforming R1 into R2 One by one, delete each of the N state-transition sequences of R1 that are not actually used in the case under consideration, transforming R1 into R1_1, R1_1 into R1_2, …, and finally into R1_N (sketched below) R1_N will be computationally (formally) identical with R2 So for a computationalist, R1_N, like R2, has no conscious experience
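A minimal sketch of the pruning procedure (hypothetical, reusing the transition-table notation from the previous sketch): repeatedly delete one transition the actual run never exercised, yielding R1_1, R1_2, … until only the actually-traversed chain, formally identical to R2, remains.

```python
# The branching table for R1 from the previous sketch:
R1 = {
    ("s0", "red_triangle"): "s1",
    ("s0", "blue_square"): "s2",
    ("s1", "red_triangle"): "s3",
    ("s1", "blue_square"): "s4",
}

def actually_used(fsa, inputs, state="s0"):
    """Collect the transitions exercised on one particular run."""
    used = set()
    for inp in inputs:
        used.add((state, inp))
        state = fsa[(state, inp)]
    return used

def prune_once(fsa, used):
    """Delete a single unused transition, yielding the next R1_n."""
    for key in fsa:
        if key not in used:
            remaining = dict(fsa)
            del remaining[key]
            return remaining
    return fsa  # nothing unused remains: we have reached R1_N

used = actually_used(R1, ["red_triangle", "red_triangle"])
stage = R1
while True:                      # R1 -> R1_1 -> R1_2 -> ... -> R1_N
    pruned = prune_once(stage, used)
    if pruned == stage:
        break
    stage = pruned

# R1_N retains exactly the transitions the actual run traversed, i.e. the
# non-branching chain that R2 implements.
assert set(stage) == used
```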

13 R1, R2 & another dilemma Bishop: "What happens to the phenomenological experience of R1 as it incrementally undergoes the above transformation?" "Either its experience of phenomenal states must gradually fade (Fading Qualia) or it must switch abruptly at some point (Suddenly Disappearing Qualia)." Me: Not necessarily: There might be several, spaced, discrete transitions. But let that pass…

14 No SDQ? Bishop: Rule out first horn: Suddenly Disappearing Qualia It would "imply that the removal of one such privileged branching state transition instruction would result in the complete loss of the robot's phenomenal experience" Me: Not convinced this is a problem But agree to rule it out for the sake of argument

15 A general argument? Bishop presents an argument not against the second horn (Fading Qualia), but against computationalism in general: The computationalist's position implies the existence of "a system, whose phenomenal experience is contingent upon non-physical interactions with sections of its control program that are not executed – a form of dualism." "Hence, if phenomenal states are purely physical phenomena, the phenomenal experience of the two robot systems, R1 and R2, must be the same."

16 Warning sign: Too strong An indication that Bishop's argument can't be right: –It proves too much If right, it would imply that there could never be a physicalist computational explanation of anything… –…not even computers!

17 Misunderstanding the physical Bishop's main mistake: claiming that differences in counterfactual behaviour do not constitute physical differences Presumably, it is by virtue of some physical difference between a state of R1_n and the corresponding state of R1_n+1 that the former has a counterfactual property the latter lacks

18 Misunderstanding the physical Note that to delete the nth transition, one would have to physically alter R1_n-1 So despite Bishop's claim, if R1 and R2 differ in their counterfactual formal properties, they must differ in their physical properties Causal properties (even counterfactual ones) supervene on physical properties

19 Counterfactuals are key So much for Bishop's argument against computational accounts of consciousness But although Bishop's argument does not touch computationalism in principle, it inspires a relevant critique of the form computationalism usually takes That is, standard computationalist theories of consciousness neglect the importance of counterfactual properties

20 Segue…

21 "Actualist" computationalism Typically, computationalist (or functionalist) theories attempt to map: –A perceptual phenomenal content –To a computational (functional) state –By virtue of the latter's actual causal origins (and perhaps its actual causal effects)

22 The Grand Illusion? For example, some argue: –Change blindness data show that only foveal information has an effect on our perceptual state –Thus, our perceptual experience is only of the foveated world –Any appearance that anything else is experienced is incorrect

23 Being counterfactual But a computationalist theory that places explicit emphasis on the role of counterfactual states can avoid the Grand Illusion result E.g.: The phenomenological state corresponding to a given computational state includes not just current foveal input, but also the foveal input the computational system would expect to have if it were to engage in certain kinds of movement: the "Imagination-Based Architecture" (IBA)

24 More on IBA These expectations can be realized in, e.g., a forward model, such as a feed-forward neural network The model is updated only in response to foveal information –E.g., it learns: "If I were to move my eyes back there, I would see that (the current foveal content)"
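A minimal sketch of the forward model, with a plain lookup table standing in for the feed-forward network (my simplification; the names are illustrative, not from the talk): it records, per possible eye movement, the foveal content the system would expect to see after making that movement, and it is updated only from foveal input.

```python
class ForwardModel:
    """Lookup table standing in for the feed-forward network."""

    def __init__(self):
        self.expected = {}  # eye-movement target -> expected foveal content

    def update(self, gaze_target, foveal_content):
        # Updated only from foveal input: "If I were to move my eyes back
        # there, I would see that (the current foveal content)."
        self.expected[gaze_target] = foveal_content

    def expect(self, gaze_target):
        # What the system would expect to see were it to saccade there.
        return self.expected.get(gaze_target)

model = ForwardModel()
model.update("upper_left", "red triangle")   # currently foveating upper left
assert model.expect("upper_left") == "red triangle"
```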

25 The IBA explanation Thus, change blindness can be explained without denying peripheral experience Consider the system after an element of the scene has changed, but before the system foveates on that part of the scene The expectations of the forward model for what would be seen if one were to, say, foveate on that area, have not been updated
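A self-contained continuation of that sketch, showing how a stale expectation yields change blindness without denying peripheral experience (again purely illustrative):

```python
expected = {}                        # gaze target -> expected foveal content
world = {"left": "red triangle", "right": "blue square"}

expected["left"] = world["left"]     # foveate left: update from foveal input
expected["right"] = world["right"]   # foveate right: likewise

world["left"] = "green triangle"     # the scene changes while unfoveated

# Until the system re-foveates "left", its experience of that region is the
# now-outdated expectation, so no change is detected or experienced.
assert expected["left"] == "red triangle"
```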

26 No Grand Illusion According to IBA, the (outdated) expectation is a part of current experience Thus no change is detected or experienced So our experience is just what it seems

27 Elaborations to IBA Only a simplistic version of IBA presented here Can be elaborated to include change not instigated by the system itself –E.g., expectations of what foveal information one would receive if the world were to change in a particular way

28 More elaborations to IBA Weighted contributions to experience –Current foveal info strongest of all –Expected foveal input after a simple movement a little less –Contribution of expected results of complex movements/sequences inversely proportional to their complexity
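A minimal sketch of one possible weighting scheme (my own; the talk does not commit to a formula): contribution falls off inversely with the complexity of the movement needed to verify the expectation, so actual foveal input (complexity 0) weighs most.

```python
def contribution_weight(movement_complexity):
    """Weight of an expectation's contribution to experience, inversely
    proportional to the complexity (e.g. number of elementary saccades)
    of the movement needed to verify it."""
    return 1.0 / (1 + movement_complexity)

assert contribution_weight(0) == 1.0    # current foveal info: strongest
assert contribution_weight(1) == 0.5    # after one simple movement: less
assert contribution_weight(3) == 0.25   # complex sequence: weaker still
```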

29 Open questions for IBA E.g., what is the experience at a non-foveated part of the visual field if one has different expectations for what one would see depending on the motor "route" one takes to foveate there? –Some "average" of the different expectations? –Winner take all? –Necker-like shift between the top n winners? –No experience at that part of the field at all, as coherence (systematicity, agreement) at a time is a requirement for perceptual experience?
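Two of these candidate rules are easy to state concretely; the following sketch (entirely hypothetical, with made-up contents and confidence weights) contrasts winner-take-all with the coherence requirement:

```python
# Conflicting expectations for one region, reached by different motor
# routes, each paired with an (invented) confidence weight:
expectations = [("red triangle", 0.6), ("green triangle", 0.4)]

def winner_take_all(exps):
    """The strongest expectation alone determines the experience."""
    return max(exps, key=lambda e: e[1])[0]

def coherence_gated(exps):
    """No experience at that part of the field unless the routes agree."""
    contents = {content for content, _ in exps}
    return contents.pop() if len(contents) == 1 else None

assert winner_take_all(expectations) == "red triangle"
assert coherence_gated(expectations) is None  # routes disagree: no experience
```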

30 Announcements For philosophers of Cognitive Science/AI: –My department, Informatics at the University of Sussex/COGS, is hiring –Tell your friends/colleagues! Those interested in Machine Consciousness: –BICS 2006, the conference I am chairing in the Greek Islands this October, is accepting submissions until the end of April Email me about either/both: ronc@sussex.ac.uk

31 Thank you! Thanks to Mark Bishop and Rob Clowes for helpful discussions

