Turing Machines and Computationalism Minds & Machines Fall 2006
Overview
From Materialism to Computationalism
–Behaviorism
–Identity Theory
–Functionalism
–Computationalism
Computations
–What is a Computation?
–Effective Computations
–Turing Machines
–Church-Turing Thesis
–Universal Machines
–The Brain as a Computer
Some Objections to Computationalism
–Simulations
–Semantics
Behaviorism Mental properties are behavioral dispositions. To be intelligent is to behave a certain way:
–Do well on tests
–Be able to solve problems
–Give correct answers to questions
–Deal with new situations
–Etc.
Compare: a car’s speed. Being fast is a behavioral property. Problem: What about our ‘inner mental life’; our thoughts, feelings, sensations, etc.?!
Identity Theory Mental states are physical states of the brain. To have a belief X is to have a certain brain state. Problems:
–Carbon Chauvinism: Why does it have to be a carbon-based configuration of neurons? Why not use other elements or other materials?
–Also, what if you put the brain in a completely different kind of body or environment? In other words, isn’t the ‘meaning’ of brain states in part derived from the role that they have in the overall causal system?
Functionalism Mental states of an agent can be defined relative to an abstract causal system as implemented by that agent’s sensory apparatus, motor control, and mediating mechanisms. Functionalism can be seen as a kind of compromise between behaviorism and identity theory:
–like behaviorism (and unlike identity theory), the emphasis is on the functionality of things, but
–like identity theory (and unlike behaviorism), we are going to look at what goes on inside of us
Multiple Realizability Functionalism allows for completely different kinds of beings to be intelligent, as the relevant abstract causal/functional organization can be implemented in various ways. Can computers be such beings?
Functionalism, Chairs, and Computers We can be functionalist about chairs:
–What makes a chair a chair is not what it is made out of (indeed, you can have wooden, plastic, or metal chairs), but that you can sit in it, i.e. its functionality.
But there is no way that we can program a computer so that it becomes a chair:
–‘chairhood’ is not a functionality that can be implemented by a computer program.
Computationalism Cognition can be defined in terms of information-processing:
–Perception is taking in information from the environment
–Memory/Beliefs/Knowledge is storing information
–Reasoning is inferring new information from existing information
–Planning is using information to make decisions
–Etc.
Information-processing can be done through computations. Therefore, cognition is computation.
Computationalism and the Brain The brain fits with computationalism: –The brain is unlike any other organ; the heart, lungs, liver, etc. all do something very much physical: they collect, filter, pump, etc. It’s all very physical. –The brain, however, is quite different: It takes in signals, and sends out signals through the nervous system. Thus, the brain does seem like an information-processor: a computer. –This would explain the neural dependency of our mind!
A Broad Thesis Computationalism simply states that cognition is some form of computation. But, there are many kinds of computation. What are the kinds of computations that underlie cognition? What kind of computer is the brain? What is the space of computation? What exactly is computation?
Formal Logic The butler argument as a formal proof:
1. H ∨ B (premise: the housemaid or the butler did it)
2. H → A (premise: if the housemaid did it, the alarm would have gone off)
3. ~A (premise: the alarm did not go off)
4. ~H (from 2, 3 by MT)
5. B (from 1, 4 by DS) … therefore … The butler did it!
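An argument like this can also be checked mechanically: an argument is valid when every truth assignment that satisfies all the premises also satisfies the conclusion. A minimal sketch in Python (the function and variable names are illustrative assumptions, not from the slides):

```python
from itertools import product

def valid(premises, conclusion):
    """Brute-force truth-table check: valid iff no assignment
    makes every premise true while the conclusion is false."""
    for H, A, B in product([True, False], repeat=3):
        if all(p(H, A, B) for p in premises) and not conclusion(H, A, B):
            return False
    return True

premises = [
    lambda H, A, B: H or B,        # the housemaid or the butler did it
    lambda H, A, B: (not H) or A,  # if the housemaid did it, the alarm went off
    lambda H, A, B: not A,         # the alarm did not go off
]
print(valid(premises, lambda H, A, B: B))  # True: the butler did it
```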
Algorithms An algorithm is a systematic, step-by-step procedure:
–Steps: Algorithms take discrete steps
–Precision: Each step is precisely defined
–Systematicity: After each step it is clear which step to take next
Examples:
–Cookbook recipe
–Filling out tax forms (ok, maybe not)
–Long division
Computations Computations are where the ideas of formal logic and algorithms come together. A computation is a symbol-manipulation algorithm. Example: long division. Not every algorithm is a computation:
–Example: furniture assembly instructions
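Long division illustrates a symbol-manipulation algorithm nicely: the paper-and-pencil procedure reads and writes decimal digits one at a time. A hedged sketch (the string-based interface is an illustrative choice):

```python
def long_division(dividend, divisor):
    """Digit-by-digit long division on a decimal string,
    mirroring the paper-and-pencil symbol algorithm."""
    quotient, remainder = "", 0
    for ch in dividend:                        # bring down one digit at a time
        remainder = remainder * 10 + int(ch)   # append digit to running remainder
        quotient += str(remainder // divisor)  # write one quotient digit
        remainder %= divisor                   # keep what is left over
    return quotient.lstrip("0") or "0", remainder

print(long_division("1234", 7))  # ('176', 2)
```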
Computers A ‘computer’ is something that computes, i.e. something that performs a computation, i.e. something that follows a systematic procedure to transform input symbol strings into output symbol strings. Notice that according to this definition, humans can be computers too, in the sense that they can follow such a systematic procedure. That is, when we do long division on paper, we are computing, and are thereby computers. Indeed, some 60 years ago, a ‘computer’ or ‘computist’ was understood to be a human being! It was only by mechanizing these computations that we obtained computers as we now know them.
The Scope and Limits of Effective Computation I An algorithm or procedure that we humans are able to follow or execute is called ‘effective’. In 1936, Turing wrote a paper in which he explored the scope and limits of effective computation. Turing tried to find the basic elements (the atomic components) of such a process.
The Scope and Limits of Effective Computation II Take the example of multiplication: we make marks on any place on the paper, depending on what other marks there already are, and on what ‘stage’ in the algorithm we are (we can be in the process of multiplying two digits, adding a bunch of digits, carrying over). So, when going through an algorithm we go through a series of stages or states that indicate what we should do next (we should multiply two digits, we should write a digit, we should carry over a digit, we should add digits, etc).
The Scope and Limits of Effective Computation III The stages we are in vary widely between the different algorithms we use to solve different problems. However, no matter how we characterize these states, what they ultimately come down to is that they indicate what symbols to write based on what symbols there are. Hence, all we need is the ability to discriminate between different states; what we call them is completely irrelevant. Moreover, although an algorithm can have any number of stages defined, since we want an answer after a finite number of steps, there can only be a finite number of such states. One could also argue that we are cognitively only able to discriminate between, or even simply define, a finite number of states, since our memory is limited. Thus, again, there can only be a finite number of states.
The Scope and Limits of Effective Computation IV Next, Turing reasoned that while one can write as many symbols as one wants at any location on the paper, one can only write one symbol at a time, and symbols have a discrete location on the paper. Therefore, at any point in time the number of symbols on the paper is finite, hence we can number them, and hence we should be able to do whatever we did before by writing the symbols in one big long string of symbols, possibly using other symbols to indicate relationships between the original symbols, and adding symbols to the left or right as needed.
The Scope and Limits of Effective Computation V Moreover, to get to some location in this string (whether to read or write a symbol), we just need to be able to go back and forth, one symbol at a time, along this one big symbol string. We can add a few states to indicate that we are in the process of doing so; this should pose no restrictions on what we would be able to do. Finally, while the marks can be arbitrary, they can only have a finite size, and hence there can only be finitely many symbols, or else there would have to be two symbols that are so much alike that we can no longer perceptually discriminate between them.
The Scope and Limits of Effective Computation VI Turing thus obtained the following basic components of effective computation:
–A finite set of states
–A finite set of symbols
–One big symbol string that can be added to on either end
–An ability to move along this symbol string (to go left or right)
–An ability to read a symbol
–An ability to write a symbol
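These components can be put together as a minimal Turing-machine interpreter. A hypothetical Python sketch, not from the slides; the transition-table format and the unary-increment example machine are illustrative assumptions:

```python
def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=10_000):
    """Minimal Turing machine: finite states, finite symbols, one
    tape extendable at both ends, a head that moves L/R, and the
    ability to read and write one symbol at a time."""
    tape, head = list(tape), 0
    for _ in range(max_steps):
        if state == "halt":
            return "".join(tape).strip(blank)
        state, write, move = transitions[(state, tape[head])]
        tape[head] = write
        head += 1 if move == "R" else -1
        if head < 0:                 # extend the tape on the left
            tape.insert(0, blank); head = 0
        elif head == len(tape):      # extend the tape on the right
            tape.append(blank)
    raise RuntimeError("step limit reached")

# Example machine: unary increment (append one '1' to a block of 1s)
inc = {
    ("start", "1"): ("start", "1", "R"),  # skip over existing 1s
    ("start", "_"): ("halt",  "1", "R"),  # write a 1 at the end and halt
}
print(run_turing_machine(inc, "111"))  # 1111
```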
Turing-Machines and Computationalism The claim of computationalism is not that our mind is implemented by a Turing-machine. Again, Turing-computation is only one form of computation. However, the theory of Turing-machines and Turing-computability can be used to argue for computationalism:
–Universal Turing Machines
–0’s and 1’s
The Church-Turing Thesis Many definitions have been proposed to capture the notion of an ‘effective computation’ other than Turing-Machines. It turns out that all proposed definitions are equivalent in the sense that whatever one is able to compute using one computational method, one is able to compute with any of these other methods as well. The Church-Turing thesis states that Turing-machines capture the notion of effective computation: whatever is effectively computable, Turing-machines can compute. The Church-Turing thesis shows the amazing computational power of Turing machines. For example, Turing machines can compute what your laptop computes.
Universal Turing Machines One of Turing’s great achievements was his finding that one can build a Universal Turing Machine: a Turing Machine U that can simulate the behavior of any Turing Machine M, given a description of M and the input I that M would work on. This led to the notion of stored programs (programs as part of the data), and thus to programmable, general-purpose computers. Our mind seems to be ‘programmable’ and ‘multi-purpose’ as well: we can learn to do new tasks through training and experience.
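The stored-program idea can be sketched with something much simpler than a full Turing machine: one fixed interpreter U whose "program" is just data handed to it. The machines below are finite-state transducers rather than Turing machines, and the description format is a hypothetical assumption, but the point is the same: U never changes, only the description M does.

```python
def U(M, I):
    """One fixed 'universal' interpreter: M is a machine description
    (a transition table, i.e. data), I is the input M works on.
    Format: (state, symbol) -> (next_state, output_symbol)."""
    state, out = "start", []
    for symbol in I:
        state, emit = M[(state, symbol)]
        out.append(emit)
    return "".join(out)

# Two different 'machines' run by the same U (stored programs):
invert = {("start", "0"): ("start", "1"), ("start", "1"): ("start", "0")}
copy   = {("start", "0"): ("start", "0"), ("start", "1"): ("start", "1")}
print(U(invert, "0110"))  # 1001
print(U(copy,   "0110"))  # 0110
```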
Symbols and Representations The symbols that computations manipulate are representations of things. By manipulating those representations, we come to know something about the things that those representations represent. Thus, things become computable: ‘I can compute the ratio of 2 numbers’. It doesn’t matter what symbols we use! We can use ‘4’ to represent the number four, but we can also use ‘IV’ or ‘glfop&^Q^GH)!#@’ or ‘5’ or ‘Bram’. Representations do have an effect on the nature of the program that is needed to do the ‘right’ thing (now you need rules for a ‘4’ instead of a ‘IV’), and also on the simplicity of the program (I have always wondered how the Romans did long division!).
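The Roman-numeral example makes the point concrete: ‘IV’ and ‘4’ represent the same number, but the rules for manipulating the symbols differ. A hedged sketch of a Roman-numeral reader (the function name is an illustrative choice):

```python
def roman_to_int(s):
    """'IV' and '4' both represent four; the manipulation
    rules depend on the representation, not the number."""
    values = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}
    total = 0
    for ch, nxt in zip(s, s[1:] + " "):
        v = values[ch]
        # subtractive notation: a smaller symbol before a larger one subtracts
        total += -v if values.get(nxt, 0) > v else v
    return total

print(roman_to_int("IV"))       # 4
print(roman_to_int("MCMXCIX"))  # 1999
```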
Syntactic and Semantic Computation [Diagram: a semantic computation takes the question ‘2+2=?’ to the answer ‘4!’; underneath, the question is encoded into a symbol string (110111111…), a machine M performs a syntactic computation f on it, and the resulting string is decoded back into the answer.]
0’s and 1’s An important result from computability theory is that all effective computations can be performed through the manipulation of bitstrings (strings of 0’s and 1’s) alone. You do need lots of these 0’s and 1’s! But this is exactly how the modern ‘digital computer’ does things. That is, at the machine level, it’s all 0’s and 1’s.
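Any finite symbol set can be turned into fixed-width bitstrings, which is why bit manipulation suffices for effective computation. A small sketch (the symbol set is an arbitrary example):

```python
# Encode a finite alphabet as fixed-width bitstrings and back again.
symbols = ["a", "b", "c", "+"]
width = (len(symbols) - 1).bit_length()           # bits needed per symbol
encode = {s: format(i, f"0{width}b") for i, s in enumerate(symbols)}
decode = {bits: s for s, bits in encode.items()}

msg = "ab+c"
bits = "".join(encode[s] for s in msg)
print(bits)                                       # '00011110'
back = "".join(decode[bits[i:i+width]] for i in range(0, len(bits), width))
print(back)                                       # 'ab+c'
```

Note how many more symbols the bitstring needs than the original message: you do indeed need lots of 0’s and 1’s.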
Physical Dichotomies The 0’s and 1’s are just abstractions though; they need to be physically implemented. Thus, you need some kind of physical dichotomy, e.g. hole in punch card or not, voltage high or low, quantum spin up or down, penny on piece of toilet paper or not, etc.
Microsoft and Smuckers Microsoft is actively working to patent 0 and 1. Smuckers has a patent on peanut butter and jelly sandwiches. One of the previous two claims is true.
Causal Topology A physical system implements a computer program if and only if that system implements a certain causal topology. This topology is highly abstract. As long as you keep the functionality of the parts, and the connections between the parts, the same, you can:
–Move parts
–Stretch parts
–Replace parts
This is why there can be mechanical computers, electronic computers, DNA computers, optical computers, and quantum computers!
Computationalism and the Brain, Part II Again, the brain fits with these results:
–One can obtain powerful information-processing capacities using very simple resources.
–Indeed, early views on the brain supposed that neurons firing or not would constitute 0’s and 1’s.
–Also, there are cases of people with abnormal brains who have normal cognitive abilities; their brains simply process information in different parts.
Objection to Computationalism: Simulations A computer simulation of a hurricane is just that: a simulation. It isn’t a real hurricane! Similarly, simulating what a brain is doing is just that: a simulation of a brain, and not a real brain.
Response to the Simulation Objection Well, there are two notions of ‘simulation’:
–‘Computation’: a computer can simulate a hurricane in that we are able to use a computer to compute the states that a hurricane goes through. These states are described or displayed in some way or other, but there is no mapping between the states the computer goes through and the states the hurricane goes through. Similarly, computing the actions of a brain does indeed merely give us a description of the brain’s functioning.
–‘Emulation’: However, simulations in the above sense have nothing to do with the claim of computationalism, which is about computations that do have the same functional organization as the brain, i.e. that emulate the brain!
Objection: The Turing Limit There are certain problems that are not Turing- computable, and therefore (by the Church-Turing Thesis) not effectively computable either. Example: The Halting Problem: deciding whether some Turing-machine will or will not halt for some input. This means that there is a non-trivial limit (called the Turing Limit) to what can be effectively computed. Moreover, some people think that they have evidence that humans figure things out beyond the Turing Limit.
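Turing’s diagonal argument for the Halting Problem can be sketched as a program. The `halts` stub below is hypothetical by construction: the argument shows that no correct total implementation of it can exist.

```python
def halts(f, x):
    """Hypothetical halting decider: would return True iff f(x)
    eventually halts. The argument below shows it cannot exist."""
    raise NotImplementedError("no such decider can exist")

def diagonal(f):
    """Do the opposite of what halts() predicts about f run on itself."""
    if halts(f, f):
        while True:   # halts() says f(f) halts, so loop forever
            pass
    return            # halts() says f(f) loops, so halt immediately

# diagonal(diagonal) would halt if and only if it does not halt --
# a contradiction, so no correct halts() is possible.
```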
Hypercomputation Some have proposed mathematical models of computation that go beyond the Turing Limit. This kind of computation is called hyper-computation. Hyper-computations are not effective, as we humans cannot consciously perform them. Indeed, no one knows how to build a hyper-computer. But that does not mean that they can’t be physically implemented. Maybe certain aspects of human cognition rely on hyper-computation? That would certainly be a blow to cognitive science and AI, which have traditionally relied on computations below the Turing Limit.
Zeus Machines One kind of hypercomputer that has been suggested is the Zeus Machine (or Accelerated Turing Machine). A Zeus machine performs every operation in half the time of its previous operation. So, if it takes 1 second to perform its first operation, then after 2 seconds it will have performed … an infinite number of operations! Using this power one can solve problems above the Turing Limit.
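The timing claim is just the geometric series 1 + 1/2 + 1/4 + …, whose partial sums stay strictly below 2 while every finite number of steps is completed before 2 seconds pass. A small exact-arithmetic check (the helper name is an illustrative choice):

```python
from fractions import Fraction

# The n-th Zeus-machine step (n = 0, 1, 2, ...) takes 1/2**n seconds.
def elapsed(n):
    """Exact total time after the first n steps."""
    return sum(Fraction(1, 2**k) for k in range(n))

print(float(elapsed(5)))  # 1.9375
print(elapsed(50) < 2)    # True: still under 2 seconds after 50 steps
```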
Are Zeus Machines Logically Possible? Unfortunately, Zeus machines seem to be logically impossible. It is claimed that a Zeus machine will have completed an infinite number of operations after some finite amount of time. However, going through an ‘infinite number of operations’ is something that, by definition of infinity, has no end. So how can that ever be completed?!
Response: Where is the Evidence? It is not clear that humans do figure things out beyond the Turing Limit. No human, for example, has ever solved the Halting Problem. In fact, the knife cuts both ways:
–if human cognition is Turing-computable, then there will be limits to what humans can figure out.
Objection: How can you get Semantics from Syntax? You have been using the word ‘computation’ in two different ways:
–Syntactic computation: the manipulation of symbols in accordance with some algorithm
–Semantic computation: the use of a syntactic computation in order to figure something out
So, syntactic computations can only become meaningful information-processing (semantic computation) if they are interpreted by some cognitive agent. In short, then, how do you get semantics from syntax?
Response Well, good question. No one really knows how semantics, understanding, and intentionality come into play. Indeed, if an agent’s cognition comes about through semantic computation, then one needs to postulate a cognitive agent to perform the interpretation, which seems to lead to an infinite regress. So this is a real problem.