
1 Consciousness in Human and Machine: A Theory (with falsifiable predictions). Richard Loosemore

2 Why bother with consciousness? People are concerned about motivations, emotions and consciousness in AGI systems. I am a system builder, but my paramount goal is to develop Safe and Friendly AI. This is part of a research effort to deal with AGI motivation (i.e. friendliness). Because the three issues are so strongly linked, I also deal with emotions and (in this paper) consciousness.

3 The “Hard” Problem Chalmers (1996): there is much confusion about the question of “consciousness” (confusion that continues even today). When you cut through it all, there is just one core philosophical problem that is “hard.” Example: is the person sitting next to you a zombie? (Here ‘zombie’ means something that shows all the usual behavior, but with no subjective experience happening on the inside.) The zombie-human difference cries out to be explained … and this is the “hard” problem.

4 The Problem That Cannot Be Named However, even within the hard problem there is confusion, because philosophers cannot say exactly what it is that needs to be explained (it is subjective, so we cannot get a handle on it). Normally in science we can at least define the thing we want to explain … in this case we cannot even get to first base and define the pesky thing properly. Rather than treat this as a weakness, I will use the fact that consciousness is so difficult to define as a critical clue to unlock the mystery.

5 The Analysis Mechanism Humans (and AGIs) must have an “analysis mechanism” (AM) that is triggered when the system thinks about the definition or meaning of a concept. The AM takes a concept-atom [x] and operates on it to answer the question “What is [x]?” (In practice, the AM is a large, dynamic cluster of mechanisms, not just one thing.) Notice that the AM follows links to precursor concepts and uses those to build an explanation. But what happens if the concept is primitive (e.g. [red]) and has no links to precursors?
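
To make the mechanics concrete, here is a minimal sketch in Python, under the assumption that concepts form a simple graph of precursor links. The names (Concept, analyze) are purely illustrative, not from the paper, and the real AM is described as a large cluster of mechanisms rather than a single function:

```python
# Hypothetical sketch of the analysis mechanism (AM) over a concept graph.
# 'Concept' and 'analyze' are illustrative names, not from the paper.

class Concept:
    """A concept-atom with links to the precursor concepts that define it."""
    def __init__(self, name, precursors=None):
        self.name = name
        self.precursors = precursors or []  # links the AM can follow

def analyze(concept):
    """Answer 'What is [x]?' by unpacking precursor links recursively."""
    if not concept.precursors:
        # A primitive like [red]: there are no precursors to follow.
        # What should the AM return here? (See the next slide.)
        return None
    return {p.name: analyze(p) for p in concept.precursors}

red = Concept("red")                 # primitive: no precursor links
crimson = Concept("crimson", [red])  # defined partly in terms of [red]
print(analyze(crimson))              # {'red': None} -- dead end at [red]
```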

6 Failure Is Not An Option I propose that when the analysis mechanism hits a primitive like [red], it cannot return nothing. Instead, it has to return something: a kind of neutral placeholder standing for the concept [the explanation of what ‘redness’ is]. Because something (rather than nothing) gets returned, the system is effectively saying to itself “there is definitely something it is like to be red” … but the placeholder has no links, so it is unanalyzable. I suggest that this same “failure” of the analysis mechanism is common to all of the consciousness questions: the AM hits a dead end in each case.
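
Continuing the same hypothetical sketch, the proposed behavior can be expressed as the AM returning an opaque placeholder object, rather than nothing, when it reaches a primitive:

```python
# Same toy concept graph as before, with the proposed fix: the AM never
# returns nothing; for a primitive it returns a link-free placeholder.

class Concept:
    def __init__(self, name, precursors=None):
        self.name = name
        self.precursors = precursors or []

class Placeholder:
    """Unanalyzable stand-in: asserts 'there is something it is like
    to be [x]' while exposing no further structure to analyze."""
    def __init__(self, concept_name):
        self.concept_name = concept_name
    def __repr__(self):
        return f"<something it is like to be [{self.concept_name}]>"

def analyze(concept):
    if not concept.precursors:
        return Placeholder(concept.name)  # something, rather than nothing
    return {p.name: analyze(p) for p in concept.precursors}

red = Concept("red")
crimson = Concept("crimson", [red])
print(analyze(crimson))  # {'red': <something it is like to be [red]>}
print(analyze(red))      # a dead end, but not a failure
```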

7 Missing the Point? The most common objection: this argument misses the point, because it explains only the locutions that people generate when they talk about consciousness, not their actual experiences (Chalmers, personal communication). This is where the proposed theory is unique. I propose that if we examine what happens when we think this objection (i.e. what was going on in Chalmers’ mind when he stated it), we will find that the objection-thought itself involves a call to the very analysis mechanism that, according to the hypothesis, is causing the trouble. So: does the proposed theory seem to miss the point and say nothing about the essence of subjective experience? Good! It should seem unsatisfactory! That fundamental unsatisfactoriness is exactly what the theory predicts.

8 Interpreting the Theory This looks like I am trying to “explain away” consciousness as a kind of artifact. I don’t buy that interpretation. It would be just as valid to conclude that subjective concepts (like qualia) are as “real” as any other concept, but that they have the unique property of being mysterious. Why call them “real”? Because the standards the intelligent system uses in its naïve assessments of “realness” are always grounded in the fact that primitive concepts like the color qualia have an axiomatic “most real of all” status. Qualia concepts are the most immediate, most real things the system knows about, and the realness of every other concept is judged against that standard. In the end, there is no meaningful distinction between these two interpretations of the theory. Consciousness-concepts are neither mere artifacts nor ordinarily real; they belong to a unique class of their own: real, but beyond the reach of scientific explanation.

9 Falsifiable Predictions Note: in each case, the prediction is that these phenomena will occur at exactly the boundary where the AM reaches the limit of its scope, and that the effect will disappear inside that boundary. Prediction 1: Blindsight. Some of the visual mechanisms will be found to lie outside the scope of the analysis mechanism, and the ones outside will be precisely those that, when spared after damage, allow visually guided behavior without conscious awareness.

10 Predictions (continued) Prediction 2: New Qualia. Build three sets of new color receptors in the eyes, sensitive to the infrared spectrum, and also build wiring that supplies the system with new concept-atoms triggered by these receptors. This should give rise to three new color qualia. Then swap the connections of the old color pathways and the new IR pathways, at a point just outside the scope of the analysis mechanism. The prediction is that the two sets of color qualia will be swapped, so that the new qualia become associated with the old visible-light colors. This will occur only if the swap happens beyond the analysis mechanism. If we subsequently remove all traces of the new IR pathways outside the foreground (again, beyond the reach of the analysis mechanism), then the old color qualia will disappear and only the new qualia will remain. (There are further predictions in the paper.)
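
The logic of the swap can be caricatured in the same toy Python style, under the strong assumption that what is experienced depends only on which concept-atom the analysis mechanism receives, never on the wiring that delivers it. All names here (RECEPTOR_TO_ATOM, experience) are illustrative, not from the paper:

```python
# Toy model of Prediction 2: wiring from receptors to concept-atoms
# lies outside the AM's scope; the AM only ever sees the atom itself.

RECEPTOR_TO_ATOM = {
    "red_receptor": "[red]",
    "ir1_receptor": "[ir-quale-1]",  # new IR pathway, new concept-atom
}

def experience(receptor):
    """What the system reports: the atom handed to the AM."""
    return RECEPTOR_TO_ATOM[receptor]

print(experience("red_receptor"))   # [red]

# Swap the connections at a point beyond the AM's reach:
RECEPTOR_TO_ATOM["red_receptor"], RECEPTOR_TO_ATOM["ir1_receptor"] = (
    RECEPTOR_TO_ATOM["ir1_receptor"],
    RECEPTOR_TO_ATOM["red_receptor"],
)

# Visible red light now evokes the new quale:
print(experience("red_receptor"))   # [ir-quale-1]
```

Because experience only ever sees the atom, the rewiring is invisible from inside the AM's scope, which is why the prediction hinges on the swap happening beyond that boundary.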

11 AGI Conclusion Any computer designed in such a way that it had the same problems with its analysis mechanism as we humans do (arguably, any fully sentient computer) would experience consciousness. We could never “prove” this statement the way we prove things about other concepts, but that is part of what it means to say that consciousness concepts have a special status: they are real, but beyond analysis. The only consistent interpretation of these phenomena is to say that, insofar as we can say anything at all about consciousness, we can be sure that the right kind of AGI would also experience subjective consciousness.

