Turing:CCS/575S10.ppt version: 20100315.


1 turing:CCS/575S10.ppt version: 20100315

2 Artificial Intelligence, Natural Language, the Chinese Room, & Reading for Understanding
William J. Rapaport Department of Computer Science & Engineering, Department of Philosophy, Department of Linguistics, and Center for Cognitive Science

3 What Is AI? Two Contrasting Views
“The science of making machines do things that would require intelligence if done by humans.” (Marvin Minsky)
• Using humans to tell us how to program computers
• e.g., play chess, solve calculus problems, …
• e.g., see, use language, …
• “classical AI”

4 What Is AI? (continued)
“The use of computer programs and programming techniques to cast light on the principles of intelligence in general and human thought in particular.” (Margaret Boden)
• Using computers to tell us something about humans
• Cognitive science

5 What Is AI? (cont’d) My view: AI is …

6 What Is AI? (cont’d)
My view: AI (“computational cognition”) asks: How much of cognition is computable?
2 further questions:
• What is computation? (Turing machine)
• How will we know if (some aspect of) cognition is computable? (Turing test)

7 Alan Turing
• British, 1912–1954
• “invented” the idea of computation: the Turing “machine”
• cracked the Nazi “Enigma” code during WW II
• devised a test for AI: the Turing “test”

8 What Is a Computer?

9

10 Turing Machine
Everything that is (humanly) computable can be computed using a machine with:
• an “infinite” tape divided into squares (with ‘0’ on each)
• a movable read-write head
• a programming language with only:
  • 2 nouns: 0, 1
  • 2 verbs: move(left or right), print(0 or 1)
  • 3 grammar rules:
    • sequence (begin do this; then do that end), where “this”/“that” = print, move, or any grammatical instruction
    • selection (if current square = 0 then do this else do that)
    • repetition (while current square = 0 [or: current square = 1] do this)
Not everything can be computed! The “halting problem”: we can’t algorithmically detect infinite loops.
So: can cognition be computed?
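The minimal language on this slide can be sketched as a small interpreter. The instruction set (print, move; sequence, selection, repetition) follows the slide; the tuple encoding and the sample tape are illustrative assumptions. (The slide’s tape starts all-0, so this demo pre-seeds a few 1s to get a terminating run.)

```python
from collections import defaultdict

def run(program, tape=None, head=0):
    """Interpret a program in the slide's minimal language:
    2 nouns (0, 1), 2 verbs (print, move), 3 grammar rules
    (sequence, selection, repetition)."""
    tape = defaultdict(int, tape or {})      # unwritten squares hold '0'
    def step(instr):
        nonlocal head
        op = instr[0]
        if op == "print":                    # print(0 or 1) on current square
            tape[head] = instr[1]
        elif op == "move":                   # move(left or right)
            head += 1 if instr[1] == "right" else -1
        elif op == "seq":                    # begin do this; then do that end
            for sub in instr[1:]:
                step(sub)
        elif op == "if0":                    # if current square = 0 then/else
            step(instr[1] if tape[head] == 0 else instr[2])
        elif op == "while":                  # while current square = v do this
            while tape[head] == instr[1]:
                step(instr[2])
    step(program)
    return dict(tape), head

# Erase a run of 1s: while the current square is 1, print 0 and move right.
prog = ("while", 1, ("seq", ("print", 0), ("move", "right")))
tape, head = run(prog, tape={0: 1, 1: 1, 2: 1})   # halts at the first 0
```

Note that nothing in the interpreter detects non-termination: a program such as `("while", 0, ("move", "right"))` loops forever on an all-0 tape, which is exactly the halting-problem point the slide makes.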

11 (How) Can Computers Think?
• (Human) cognition is computable (?)
• (Human) cognitive states & processes can be expressed as algorithms
• They can be implemented on non-human computers

12 How Computers Can Think (cont’d)
• Are computers executing such cognitive algorithms merely simulating cognitive states & processes, or are they actually exhibiting them? Do such computers think?
• Answer: Turing’s test
• Objection: Searle’s Chinese-Room Argument
• My reply: computers can understand just by manipulating symbols (like a Turing machine)

13 The Imitation Game
[Diagram: an INTERROGATOR questions a MAN and a WOMAN; each claims “I’m the woman”.]

14 The Turing Test
[Diagram: the Imitation Game with a COMPUTER replacing the MAN; the COMPUTER and the WOMAN each claim “I’m the woman”; the INTERROGATOR must decide.]

15 The Turing Test #2
[Diagram: variant of the Imitation Game in which the contestants are a COMPUTER and a MAN, each claiming to be a man; the INTERROGATOR must decide.]

16 The Turing Test #3
[Diagram: variant of the Imitation Game in which the contestants are a COMPUTER and a HUMAN, each claiming to be human; the INTERROGATOR must decide.]

17 The Turing Test
[Diagram: the INTERROGATOR sends questions and receives responses, judging: human or computer?]
“I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.” (Turing 1950)

18 Thinking vs. “Thinking”
© The New Yorker, 5 July 1993

19 Thinking vs. “Thinking”, cont’d.
Cartoon works because:
• You don’t know whom you’re communicating with via computer
• Nevertheless, we assume we are talking to a human, i.e., an entity with human cognitive capacities
= Turing’s point, namely the Argument from Analogy, a solution to the Problem of Other Minds:
• I know I think; how do I know you do?
• You are otherwise like me ⇒ (probably) you are like me w.r.t. thinking

20 Thinking vs. “Thinking”, cont’d.
BUT: A dog that passes as a human isn’t a human.
John Searle: A computer that simulates thinking (passes a Turing test) isn’t really thinking.
The Chinese-Room Argument

21 Thinking vs. “Thinking”, cont’d.
• pre-1950: “computers” were (only) humans
• 2000: computers are (only) machines
• General educated opinion: computers are viewed not as implementing-devices but in functional, I/O terms
• Ditto for ‘think’ (more later)
• BUT: it’s not really thinking (?)

22 The Chinese Room © MacroVU Press

23

24 The Chinese-Room Argument
It’s possible to pass the TT, yet not (really) think.
[Diagram: a native Chinese speaker sends a story + questions (in Chinese) to Searle-in-the-room, who can’t understand Chinese; following a program (in English) for manipulating [Chinese] “squiggles”, he returns responses in fluent Chinese.]

25 Searle’s Chinese-Room Argument
(1) Computer programs just manipulate symbols.
(2) Understanding has to do with meaning.
(3) Symbol manipulation alone is not sufficient for meaning.
(4) ∴ No computer program can understand.
¬(3): You can understand just by manipulating symbols!

26 Contextual Vocabulary Acquisition
Could Searle-in-the-room figure out a meaning for an unknown squiggle? Yes! The same way you can figure out a meaning for an unfamiliar word from its context.

27 What Does ‘Brachet’ Mean?
(From Malory’s Morte D’Arthur [page # in brackets])
1. There came a white hart running into the hall with a white brachet next to him, and thirty couples of black hounds came running after them. [66]
2. As the hart went by the sideboard, the white brachet bit him. [66]
3. The knight arose, took up the brachet and rode away with the brachet. [66]
4. A lady came in and cried aloud to King Arthur, “Sire, the brachet is mine”. [66]
5. There was the white brachet which bayed at him fast. [72]
6. The hart lay dead; a brachet was biting on his throat, and other hounds came behind. [86]
(Predefine ‘hart’, ‘sideboard’, ‘bay’. Sentence 3: if no one suggests that a brachet is small, ask how big it is.)

28 Reader’s Prior Knowledge
Where is the “context”?
[Diagram: the reader’s prior knowledge (PK1–PK4), separate from the text.]

29 Reader’s Prior Knowledge
[Diagram: the text now contains sentence T1, still separate from prior knowledge PK1–PK4.]

30 Internalized (Co-)Text integrated with Reader’s Prior Knowledge
[Diagram: T1 is internalized as I(T1) among PK1–PK4.]

31 Belief-Revised Internalized Text integrated with Reader’s Prior Knowledge
[Diagram: T1 is internalized as I(T1); inference from I(T1) and prior knowledge yields a new proposition P5.]

32 B-R Integrated KB
[Diagram: sentence T2 is internalized as I(T2); further inference yields P6, alongside PK1–PK4, I(T1), and P5.]

33 B-R Integrated KB
[Diagram: sentence T3 appears in the text and is internalized as I(T3); the KB now holds PK1–PK4, I(T1), P5, I(T2), P6, and I(T3).]

34 B-R Integrated KB
[Diagram: as on the previous slide, with T3 internalized as I(T3).]

35 Reader’s Mind
The “context” for CVA is the reader’s mind, not the (co-)text.
[Diagram: the reader’s mind contains PK1–PK4, I(T1), P5, I(T2), P6, I(T3), and a further inference P7.]

36 “Words should be thought of not as having intrinsic meaning, but as providing cues to meaning.”
Jeffrey L. Elman, “On the Meaning of Words and Dinosaur Bones: Lexical Knowledge Without a Lexicon” (2009)
“Words might be better understood as operators, entities that operate directly on mental states in what can be formally understood as a dynamical system.”
Jeffrey L. Elman, “On Words and Dinosaur Bones: Where Is Meaning?” (2007)

37 Computational CVA
Implemented in SNePS (Shapiro 1979; Shapiro & Rapaport 1992): an intensional, propositional, semantic-network knowledge-representation, reasoning, & acting system.
• “intensional”: e.g., can represent fictional objects
• “propositional”: can represent sentences in a text
• “semantic network”: a labeled, directed graph with nodes linked by arcs
  • indexed by node: from any node, can describe the rest of the network
• Serves as a model of the reader (“Cassie”)
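A labeled, directed, node-indexed graph of the kind described above can be sketched as follows. This is not the real SNePS (a Lisp-based system); the arc labels (member, class, object, property) and node names are illustrative stand-ins in SNePS style.

```python
class SemanticNetwork:
    """Toy labeled directed graph (not the real SNePS, which is Lisp-based)."""
    def __init__(self):
        self.arcs = []                              # (source, label, target)

    def assert_arc(self, source, label, target):
        self.arcs.append((source, label, target))

    def describe(self, node, depth=2):
        """From any node, describe the part of the network reachable
        within `depth` arcs (the slide's node-indexed access)."""
        if depth == 0:
            return node
        out = {label: self.describe(target, depth - 1)
               for source, label, target in self.arcs if source == node}
        return {node: out} if out else node

net = SemanticNetwork()
# Propositional style: node M1 is itself the proposition "B14 is a brachet",
# and M2 is "B14 is white" (labels here are illustrative).
net.assert_arc("M1", "member", "B14")
net.assert_arc("M1", "class", "brachet")
net.assert_arc("M2", "object", "B14")
net.assert_arc("M2", "property", "white")
```

Representing each proposition as its own node (M1, M2) is what makes the network propositional: further arcs could then assert beliefs *about* those propositions.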

38 Computational CVA (cont’d)
KB: SNePS representation of the reader’s prior knowledge
I/P: SNePS representation of a word in its co-text
Processing (“simulates”/“models”/is?! reading):
• Uses logical inference, generalized inheritance, & belief revision to reason about the text integrated with the reader’s prior knowledge
• N & V definition algorithms deductively search this “belief-revised, integrated” KB (the wide context) for slot fillers for the definition frame
O/P: Definition frame
• slots (features): classes, structure, actions, properties, etc.
• fillers (values): info gleaned from context (= integrated KB)
My colleagues and I have developed a computational theory of CVA, which is implemented in a propositional semantic-network knowledge-representation-and-reasoning system (SNePS; Shapiro & Rapaport 1987, 1992, 1995; Shapiro & Group 2004). Our computational system begins with a stored knowledge base containing SNePS representations of relevant PK. It takes as input SNePS representations of a passage containing an unfamiliar word. The processing begins with inferences drawn from these two sources of information. When asked to define the word, it applies definition algorithms (for nouns and for verbs; adjectives are under investigation) that deductively search the resulting network for information of the sort that might be found in a dictionary definition, outputting a definition frame (or schema) whose slots are the kinds of features that a definition might contain (e.g., class membership, properties, actions, spatio-temporal information, etc.) and whose slot-fillers contain information gleaned from the network. (Details of the underlying theory, representations, processing, inferences, belief revision, and definition algorithms are presented in Ehrlich 1995, 2004; Ehrlich & Rapaport 1997, 2004; Rapaport & Ehrlich 2000; Rapaport & Kibby 2002; Rapaport 2003.)
We are investigating ways to make our system more robust, to embed it in a natural-language-processing system, and to incorporate what Sternberg and colleagues (1983ab, 1987) call “internal” context, i.e., morphological (and etymological) analysis.

39 Cassie learns what “brachet” means
Background info about: harts, animals, King Arthur, etc. No info about: brachets.
Input: formal-language (SNePS) version of simplified English.
A hart runs into King Arthur’s hall.
• In the story, B12 is a hart.
• In the story, B13 is a hall.
• In the story, B13 is King Arthur’s.
• In the story, B12 runs into B13.
A white brachet is next to the hart.
• In the story, B14 is a brachet.
• In the story, B14 has the property “white”.
• Therefore, brachets are physical objects. (deduced while reading; PK: Cassie believes that only physical objects have color)

40 --> (defineNoun "brachet")
Definition of brachet:
 Class Inclusions: phys obj
 Possible Properties: white
 Possibly Similar Items: animal, mammal, deer, horse, pony, dog
I.e., a brachet is a physical object that can be white and that might be like an animal, mammal, deer, horse, pony, or dog.

41 A hart runs into King Arthur’s hall. A white brachet is next to the hart. The brachet bites the hart’s buttock. [PK: Only animals bite]
--> (defineNoun "brachet")
Definition of brachet:
 Class Inclusions: animal
 Possible Actions: bite buttock
 Possible Properties: white
 Possibly Similar Items: mammal, pony

42 A hart runs into King Arthur’s hall. A white brachet is next to the hart. The brachet bites the hart’s buttock. The knight picks up the brachet. The knight carries the brachet. [PK: Only small things can be picked up/carried]
--> (defineNoun "brachet")
Definition of brachet:
 Class Inclusions: animal
 Possible Actions: bite buttock
 Possible Properties: small, white
 Possibly Similar Items: mammal, pony

43 A hart runs into King Arthur’s hall. A white brachet is next to the hart. The brachet bites the hart’s buttock. The knight picks up the brachet. The knight carries the brachet. The lady says that she wants the brachet. [PK: Only valuable things are wanted]
--> (defineNoun "brachet")
Definition of brachet:
 Class Inclusions: animal
 Possible Actions: bite buttock
 Possible Properties: valuable, small, white
 Possibly Similar Items: mammal, pony

44 A hart runs into King Arthur’s hall. A white brachet is next to the hart. The brachet bites the hart’s buttock. The knight picks up the brachet. The knight carries the brachet. The lady says that she wants the brachet. The brachet bays at Sir Tor. [PK: Only hunting dogs bay]
--> (defineNoun "brachet")
Definition of brachet:
 Class Inclusions: hound, dog
 Possible Actions: bite buttock, bay, hunt
 Possible Properties: valuable, small, white
I.e., a brachet is a hound (a kind of dog) that can bite, bay, and hunt, and that may be valuable, small, and white.
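The progression on slides 39–44, in which the definition frame is belief-revised as each sentence supplies a cue matched against prior knowledge, can be sketched as follows. The PK_RULES table, the cue strings, and the frame layout are illustrative assumptions, far simpler than the actual definition algorithms (Ehrlich 1995).

```python
# Hypothetical prior-knowledge rules: each textual cue about the unknown
# word triggers an update of the definition frame. Cue strings and rules
# are illustrative stand-ins for the real inference machinery.
PK_RULES = {
    "is white":   {"properties": ["white"]},      # only phys. objects have color
    "bites":      {"class": "animal", "actions": ["bite"]},  # only animals bite
    "is carried": {"properties": ["small"]},      # only small things are carried
    "is wanted":  {"properties": ["valuable"]},   # only valuable things are wanted
    "bays":       {"class": "hound", "actions": ["bay", "hunt"]},  # hunting dogs bay
}

def revise(frame, cue):
    """Belief-revise the definition frame with one cue from the co-text."""
    for slot, update in PK_RULES.get(cue, {}).items():
        if slot == "class":
            frame["class"] = update           # a later, more specific class wins
        else:
            for value in update:
                if value not in frame[slot]:
                    frame[slot].append(value)
    return frame

# Reading the brachet passage cue by cue:
frame = {"class": "physical object", "actions": [], "properties": []}
for cue in ["is white", "bites", "is carried", "is wanted", "bays"]:
    revise(frame, cue)
# frame ends as: class "hound"; actions bite, bay, hunt;
# properties white, small, valuable
```

Each pass through `revise` mirrors one `defineNoun` output above: the class hypothesis narrows (physical object → animal → hound) while properties and actions accumulate.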

45 Algorithms (for Computers)
Generate an initial hypothesis by “syntactic manipulation”:
• Algebra: solve an equation for the unknown value X
• Syntax: “solve” a sentence for the unknown word X
• “A white brachet (X) is next to the hart” ⇒ X (a brachet) is something that is next to the hart and that can be white
• “Define” node/word X in terms of immediately connected nodes/words
Deductively search the wide context to update the hypothesis:
• Look for: class membership, properties, structure, acts, agents, etc.
• “Define” node/word X in terms of some (but not all) other connected nodes/words
Output a definition “frame” (schema)
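The “solve a sentence for the unknown word X” step might be sketched as a toy pattern-matcher. The regular expression below covers only the slide’s one sentence shape and is purely illustrative; the real algorithm operates over the parsed semantic network, not over surface strings.

```python
import re

def solve_for(word, sentence):
    """'Solve' a sentence for the unknown word X, algebra-style: restate
    the sentence as a hypothesis about X via its immediate neighbors.
    Toy matcher for the single shape 'A <Adj> <X> is next to the <N>'."""
    match = re.match(r"A (\w+) (\w+) is next to the (\w+)", sentence)
    if not match or match.group(2) != word:
        return None
    adjective, _, neighbor = match.groups()
    return (f"{word} (X) is something that is next to the {neighbor} "
            f"and that can be {adjective}")

hypothesis = solve_for("brachet", "A white brachet is next to the hart")
```

As in algebra, the unknown is characterized entirely in terms of the other symbols around it; deductive search of the wide context then refines this initial hypothesis.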

46 A Computational Theory of CVA
• A word does not have a unique meaning.
• A word does not have a “correct” meaning.
  • The author’s intended meaning for a word doesn’t need to be known by the reader in order for the reader to understand the word in context.
  • Even familiar/well-known words can acquire new meanings in new contexts.
  • Neologisms are usually learned only from context.
• Every co-text can give some clue to a meaning for a word.
  • Generate an initial hypothesis via syntactic/algebraic manipulation.
• But the co-text must be integrated with the reader’s prior knowledge.
  • Large co-text + large PK ⇒ more clues.
  • Many occurrences of the word allow asymptotic approach to a stable meaning hypothesis.
• CVA is computable.
• CVA is “open-ended” hypothesis generation.
  • CVA ≠ guessing a missing word (“cloze”); CVA ≠ word-sense disambiguation.
• Some words are easier to compute meanings for than others (N < V < Adj/Adv).
• CVA can improve general reading comprehension (through active reasoning).
• CVA can & should be taught in schools.

47 CVA as Symbol Manipulation
We “solved” each sentence for the unknown word in terms of the rest of the text together with our background knowledge, just as we can solve an algebra problem in terms of the rest of the equation: by manipulating symbols, which is what computers do!

48 Computational Natural-Language Understanding
What else is needed, besides symbol manipulation?

49 Mind as a Symbol-Manipulation System
To understand language, a cognitive agent must:
• Take discourse as input
• Understand ungrammatical input
• Make inferences & revise beliefs
• Make plans: for speech acts, to ask/answer questions, to initiate conversation
• Understand plans: the speech-act plans of the interlocutor
• Construct a user model
• Learn (about the world, about language)
• Have background/world/commonsense knowledge
• Remember what it heard, learned, inferred, & revised
= have a mind!
All of this can be done by computers manipulating symbols!

50 Reading for Understanding Research Initiative
Institute of Education Sciences, US Department of Education
• 5 or 6 “R&D Core Teams”
• 1 or 2 “Network Assessment Teams”
• A “Manhattan Project”/“Apollo Mission” to improve (the teaching of) reading for understanding / reading comprehension
• $20 million over 5 years

51 A Center for Reading for Understanding
A Research and Development Core Team to Integrate Vocabulary, Writing, Reasoning, Multimodal Literacies, and Oral Discourse to Improve Reading Comprehension
• Principal location: UB
• Satellite locations: Niagara University, Penn State University
• Affiliated school districts: Niagara Falls CSD, Cleveland Hill USD, State College Area SD

52 Projects
• Writing-Intensive Reading Comprehension: Jim Collins (UB/LAI)
• Contextual Vocabulary Acquisition: Bill Rapaport (UB/CSE & CCS)
• Multimodal Literacies: Kathleen Collins (Penn State)
• Virtual Interactive Environments to improve vocabulary and reading comprehension: Lynn Shanahan & Mary McVee (UB/LAI)
• Interactively Modeled Metacognitive Thinking: Rob Erwin (Niagara U)
• Bilingualism and Basic Cognitive Processes: Janina Brutt-Griffler (UB/LAI)
• Experimental Design & Statistical Analysis: Ariel Aloe (UB/CSEP)

53 Writing-Intensive Reading Comprehension — Jim Collins (LAI)
Based on previous successful research:
• “Thinksheets” that guide students’ writing about a text improve their reading comprehension.
• Thinksheets begin with questions whose answers can be found directly in the text, then gradually ask more abstract questions requiring inference.
Goals:
• extend this research
• apply it to more grade levels
• semi-automate the development of thinksheets

54 Multimodal Literacies — Kathleen Collins (Penn State)
Previous research: if teachers work with professional artists to develop strategies for integrating the arts, as forms of inquiry and meaning-making, into the school curriculum, then more students are able to engage successfully with core curricula.
Multimodal interventions & enhancements will be designed to include the following broad categories:
• Visual/Graphic (still photography, illustration, digital video)
• Oral/Interactional (scripted dialogue, interviews, guided conversations, podcasts)
• Tactile (movement, model building, sculpting)

55 Virtual Interactive Environments to Develop Vocabulary Knowledge — Lynn Shanahan & Mary McVee (LAI)
• Develop an extensible & effective computational framework for improving vocabulary knowledge through virtual-contextual learning.
• Couple a virtual world, a large projection screen, stereo sound, & positional tracking using commodity devices (e.g., Wii controllers).
• The virtual environment will be created using a programming toolkit developed for use in computer games and engineering simulations (NYSCEDII).
• Rather than having learners interact with a computer monitor, the environment will use projection technology to let them interact with objects at actual size, increasing immersion in the virtual world.

56 Interactively Modeled Metacognitive Thinking — Rob Erwin (NU)
Instead of teaching specific strategies for reading comprehension (which have recently been shown to be worse than teaching textual content):
• Have students develop their own strategies on an as-needed basis.
• Then compare to the strategies- & content-based approaches.

57 Bilingualism & Basic Cognitive Processes — Janina Brutt-Griffler (LAI)
What are the cognitive advantages of being a bilingual reader? What are the cognitive aspects of text comprehension and possible linguistic advantages observed in bilingual readers?

58 Contextual Vocabulary Acquisition — Bill Rapaport (CSE & CCS)
Previous research: a reader's understanding of a word's meaning in a context is a function of both the context (the surrounding words) and the reader's prior knowledge.
Already accomplished:
• A procedure for successful CVA can be expressed in terms so precise that it can be programmed into a computer.
• That computational procedure can then be converted into a strategy teachable to human readers.
We propose to embed this procedure in a curricular intervention that can help readers improve both their vocabulary and their reading comprehension.

59 CVA (cont’d)
Our goal is not to improve vocabulary per se, but:
• to improve reading comprehension by active thinking about the text, with a specific goal of vocabulary enrichment in mind, and
• to provide readers with a method that can be used independently (e.g., when they are reading on their own) to learn new vocabulary and to improve comprehension.

60 CVA (cont’d)
GOFAI: if we know how to explicitly teach some cognitive task to humans (e.g., play chess, do calculus, prove theorems), then we can explicitly program a computer to do that task pretty much as humans do it.
Our CVA algorithms fall into this category.
Can we do the converse? Can a human reader learn vocabulary & improve reading comprehension by carrying out our algorithm?

61 CVA (cont’d)
Not “think like a computer”, i.e., rigidly, mechanically, uncreatively.
But: what we have learned by teaching a computer to do CVA can now help us teach human readers who need guidance in CVA.

62 CVA (cont’d)
Research questions:
• Can a computer algorithm be translated into a successful curricular intervention?
• Does this computer-based curriculum improve (meaning) vocabulary?
  • CVA students vs. “typical context-based” students vs. a “no treatment” control: each reads the same passage with a single unfamiliar word & figures out a meaning from context.
  • CVA students vs. a “direct-method” control taught at time t, tested on the meaning at time t′ > t.

63 CVA (cont’d)
• Can a computer algorithm be translated into a successful curricular intervention?
• Does the algorithm-based curriculum improve reading comprehension?
  • CVA students vs. “typical context-based” students vs. no-treatment students
  • Test for reading comprehension on passages containing unfamiliar words and on passages with no unfamiliar word.

64 CVA (cont’d)
Other issues:
• “teacher training”/professional development
• oral discussion: explicit instruction on using language to reason is valuable
• use the thinksheet as a “detective”/“scientist” notebook
• role & nature of prior knowledge: what is needed, how to identify it, how to elicit it, etc.
• test at different ages & in STEM vs. ELA
• develop software for classroom use: a student can ask Cassie how she figured it out

