Artificial Intelligence: Attacks on AI — Ian Gent

Part I: Lucas: Minds, Machines & Gödel
Part II: Searle: Minds, Brains & Programs
Part III: Weizenbaum: Computer Power & Human Reason
Strong AI and Weak AI
- Phrases coined by John Searle
- Weak AI takes the view that computers are powerful tools
  - to do things that humans otherwise do
  - and to study the nature of minds in general
- Strong AI takes the view that a computer with the right software is a mind
- Lucas and Searle attack Strong AI
- Weizenbaum attacks all AI
Minds, Machines and Gödel
- Title of an article by J.R. Lucas
  - reprinted in 'Minds and Machines', ed. A.R. Anderson (Prentice Hall, 1964)
- The argument is based on the following premises:
  1. Gödel's theorem shows that any consistent and sufficiently powerful formal system must be limited
     - there must be true statements it cannot prove
  2. Computers are formal systems
  3. Minds have no limit on their abilities
Minds, Machines and Gödel
- Premises:
  1. Gödel's theorem shows that any consistent and sufficiently powerful formal system must be limited
     - there must be true statements it cannot prove
  2. Computers are formal systems
  3. Minds have no limit on their abilities
- Conclusion: computers cannot have minds
- Should Strong AI give up and go home?
  - Certainly Gödel's theorem applies to computers
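As background to premise 1, Gödel's first incompleteness theorem can be stated a little more precisely. The following is an informal sketch of the standard construction, not part of Lucas's own presentation:

```latex
% Gödel's first incompleteness theorem (informal statement):
% for any consistent formal system $F$ strong enough to encode arithmetic,
% there is a sentence $G_F$ that is true but not provable in $F$.
%
% The Gödel sentence is constructed so that it asserts its own unprovability:
\[
  G_F \;\leftrightarrow\; \neg\,\mathrm{Prov}_F\bigl(\ulcorner G_F \urcorner\bigr)
\]
% If $F$ is consistent, then $F \nvdash G_F$; and under the stronger
% assumption of $\omega$-consistency, $F \nvdash \neg G_F$ either.
```

Lucas's claim is that a human mathematician can "see" that G_F is true for any machine F, while the machine itself cannot prove it.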
Refuting Lucas (1)
- Turing decisively refuted Lucas in his article 'Computing Machinery and Intelligence'
- The defeat is on two counts:
  1. "Although it is established that there are limitations to the powers of any particular machine, it has only been stated, without any sort of proof, that no such limitations apply to the human intellect"
     - i.e. are we sure humans can prove all true theorems?
- Maybe humans are unlimited? What then?
Refuting Lucas (2)
- Turing's second point is decisive:
  - "We too often give wrong answers ourselves to be justified in being very pleased at such evidence of fallibility on the part of machines."
  - Gödel's theorem applies only to consistent formal systems
  - Humans often utter untrue statements
  - We might be unlimited formal systems which make errors
- The two arguments show that Lucas's attack fails
  - Strong AI'ers don't need to worry about Gödel's theorem
- The 'Chinese Room' attack is much stronger
The Chinese Room
- John Searle, "Minds, Brains, and Programs", The Behavioral and Brain Sciences, vol. 3, 1980
- Searle attacked Strong AI with the 'Chinese Room' argument
- Remember, Searle is attacking Strong AI
  - he attacks claims that, e.g., story-understanding programs
    - literally understand stories
    - explain human understanding of stories
The Chinese Room Thought Experiment
- A thought experiment aimed at showing that conscious computers are impossible
  - by analogy with an obviously ridiculous situation
- John Searle does not understand Chinese
- Imagine a set-up in which he can simulate a Chinese speaker
Locked in a Chinese Room
- John Searle is locked in solitary confinement
- He is given lots of …
  - blank paper, pens, and time
  - Chinese symbols on bits of paper
  - an in-tray and an out-tray, for receiving and sending Chinese messages
  - rule books written in English (which he does understand)
    - telling him how to take paper from the in-tray, process it, and put a new bit of paper with symbols on it in the out-tray
Outside the Chinese Room
- Unknown to Searle, his jailers …
  - regard the in-tray as containing input from a Chinese player of Turing's imitation game
  - regard the rule books as containing an AI program
  - regard the out-tray as containing its responses
- Suppose Searle passes the Turing Test in Chinese
- But Searle still does not understand Chinese
- By analogy, even a computer program that passes the Turing Test does not truly "understand"
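The structure of the room can be sketched as a tiny program. This is purely illustrative (the rules and symbols below are invented for the example, not taken from Searle): the point is that the program maps symbol strings to symbol strings by shape alone, with no representation of meaning anywhere in it.

```python
# Illustrative sketch of the Chinese Room's rule books as a purely
# syntactic program. The rule entries are hypothetical examples.

# Hypothetical rule book: incoming symbol sequence -> prescribed reply.
RULE_BOOK = {
    "你好吗": "我很好，谢谢",      # the program never "knows" this is a greeting
    "你会说中文吗": "会，一点点",  # ...or that this answer is ironic
}

def chinese_room(in_tray: str) -> str:
    """Match the incoming symbols against the rule book and emit the reply.

    Nothing here understands Chinese: the function only compares shapes
    (strings), exactly as Searle does with his bits of paper.
    """
    return RULE_BOOK.get(in_tray, "请再说一遍")  # default: "please say that again"

print(chinese_room("你好吗"))
```

However convincing the out-tray's contents are to the jailers, the lookup itself is the whole of what the "speaker" does, which is Searle's point against Strong AI.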
Objections and Responses
- Like Turing, Searle considers various objections
1. The Systems Reply
   - "The whole system (including books and paper) understands"
   - Searle: learn all the rules and do the calculations entirely in your head
     - still Searle (i.e. the whole system) does not understand
2. The Robot Reply
   - "Put a computer inside a robot with cameras, sensors, etc."
   - Searle: put a radio link to the room inside the robot
     - still Searle (the robot's brain) does not understand
Objections and Responses (continued)
3. The Brain Simulator Reply
   - "Make the computer simulate neurons, not run AI programs"
   - In passing, Searle notes this is a strange reply: it seems to abandon AI after all!
   - Searle: there is no link between such mental states and their ability to affect states of the world
   - "As long as it simulates only the formal structure of a sequence of neuron firings … it won't have simulated what matters about the brain, namely its causal properties, its ability to produce intentional states"
     - "intentional states": that feature of mental states by which they are directed at states of affairs in the world
Is Searle right?
- Almost universally disagreed with by AI writers
- But there is no 100% rock-solid refutation, as Turing's was of Lucas
- Some points to ponder:
  - Is the thought experiment valid?
    - e.g. 500 MHz × 1 hour >> Searle-as-processor × 1 lifetime
  - If machines lack intentionality, where do humans get it?
  - Is AI a new kind of dualism?
    - Old: mind separate from body (vilified by AI people)
    - New: thought separate from brain
  - Does it matter?
Joseph Weizenbaum
- "Computer Power and Human Reason", Penguin, 1976 (second edition 1985)
- Weizenbaum wrote ELIZA in the mid-1960s
- He was shocked by reactions to such a simple program:
  - people wanted private conversations with it
  - therapists suggested the use of automated therapy programs
  - people believed ELIZA had solved natural language understanding
Computer Power and Human Reason
- Weizenbaum does not attack the possibility of AI
- He attacks the use of AI programs in some situations
- He attacks the "imperialism of instrumental reason"
  - e.g. his story about the introduction of landmines:
    - scientists tried to stop the carpet bombing in Vietnam
    - but did not feel able to oppose it on moral grounds
    - so they suggested an alternative to bombing
    - namely, the widespread use of landmines
17 What’s the problem? z“The question I am trying to pursue here is: y‘What human objectives and purposes may not be appropriately delegated to a computer?’ ” zHe claims that the Artificial Intelligentsia claim ythere is no such domain zBut knowledge of the emotional impact of touching another person’s hand “involves having a hand at the very least” zShould machines without such knowledge be allowed power over us?
What computers shouldn't do
- Weizenbaum argues that many decisions should not be handed over to computers
  - e.g. law cases, psychotherapy, battlefield planning
- Especially because large AI programs are 'incomprehensible'
  - e.g. you may know how Deep Blue works
    - but not the reason for a particular move against Kasparov
- The imperialism of instrumental reason must be avoided
  - especially by teachers of computer science!
And finally …
- Weizenbaum gives an example of "a computer application that ought to be avoided"
- "Wreck a nice beach" = "Recognise speech" (the two phrases sound alike, a classic speech-recognition ambiguity)
- His worries about speech recognition:
  - the system might be prohibitively expensive, e.g. too much for a large hospital
  - it might be used by the Navy to control ships by human voice
  - listening machines for monitoring phones
- Sorry Joe, AI is out there…