Presentation on theme: "Clarifying what Functionalism is…" - Presentation transcript:
1. Clarifying what Functionalism is…
Functionalism says we can study the information-processing tasks (and the algorithms for doing them) independently from the physical level.
This means mental processes are "multiply realisable": able to be manifested in various systems, even perhaps computers, so long as the system performs the appropriate functions (Wikipedia definition).
[Diagram: sensory input feeds the sensory systems (sight, hearing, taste, smell, touch, balance, heat/cold…), which feed the central systems (categorisation, attention, memory, knowledge representation, numerical cognition, thinking, learning, language), which drive the motor systems (voice, limbs, fingers, head…) to produce motor output; all of this rests on a physical implementation.]
2. Clarifying what Functionalism is…
What about Brooks? (Remember the tutorial article.) Is he a functionalist?
Yes! Otherwise he wouldn't be trying to use computers to implement the processing in his robots.
He would instead be trying to use some organic system, as a non-functionalist would believe that the processing happening in an animal's neurons could not be performed by a computer.
[Same diagram as slide 1.]
3. Clarifying what Functionalism is…
So what was it Brooks was saying about the real world?
He said the sensory side needs to be connected to the real world, not a simulation.
e.g. a digital camera getting data from the real world, with noise, messy lighting conditions, etc.
[Same diagram as slide 1.]
4. Clarifying what Functionalism is…
So what was it Brooks was saying about the real world?
He said the motor side also needs to be connected to the real world, not a simulation.
e.g. wheels on the robot, which might slip on the ground or stick on the carpet, etc.; i.e. messy.
[Same diagram as slide 1.]
5. Clarifying what Functionalism is…
So what was it Brooks was saying about the real world?
He didn't say he had any problem with the algorithms themselves being implemented on a computer.
[Same diagram as slide 1.]
6. The Physical Symbol System
Some sort of Physical Symbol System seems to be needed to explain human abilities.
Humans are "programmable": we can take on new information and instructions, and we can learn to follow new procedures, e.g. a new mathematical procedure.
The human mind is very flexible…
…but this is not true of other animals, even apes. Animals have special solutions for specific tasks, e.g. frog prey location.
The flexible human Physical Symbol System must have evolved from animals' processing systems; the details of its physical implementation are unknown.
Let's stick with the Physical Symbol System for now, and see whether we can flesh out more details.
7. The Language of Thought
What is the language we "think in"? Is it our natural language, e.g. English, or "mentalese"?
Some introspective arguments against natural language:
A word is "on the tip of my tongue", but I can't find it.
It is difficult to define concepts in natural language, e.g. dog, anger.
We have a feeling of knowing something, but it is hard to translate into language.
Some observable evidence against natural language:
Children reason with concepts before they can speak.
We often remember the gist of what is said, not the exact words.
Cognitive science experiment (recall after a 20-second delay):
He sent a letter about it to Galileo, the great Italian scientist.
He sent Galileo, the great Italian scientist, a letter about it.
A letter about it was sent to Galileo, the great Italian scientist.
Galileo, the great Italian scientist, sent him a letter about it.
8. Represent as Propositions
Just like the logic we had for AI: likes(john, mary), where "likes" is a relation.
[Diagram: network nodes for two propositions: likes(john, mary), with roles relation = likes, subject = john, object = mary; and gives(john, apple, mary), with roles relation = gives, subject = john, object = an apple, recipient = mary.]
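As a rough sketch (not part of the slides), the two propositions in the diagram could be written down as role/filler structures. The relation and role names come from the diagram; the dict encoding itself is an illustrative assumption:

```python
# A minimal sketch of propositions as labelled role/filler structures.
# Relation and role names follow the slide's diagram; the dict encoding
# is an illustrative assumption.

likes = {
    "relation": "likes",
    "subject": "john",
    "object": "mary",
}

gives = {
    "relation": "gives",
    "subject": "john",
    "object": "apple",     # "an apple" in the diagram
    "recipient": "mary",
}

def render(prop):
    """Render a proposition in the predicate-logic style used for AI."""
    args = [value for role, value in prop.items() if role != "relation"]
    return f"{prop['relation']}({', '.join(args)})"

print(render(likes))   # likes(john, mary)
print(render(gives))   # gives(john, apple, mary)
```

The role labels (subject, object, recipient) are what lets the same "john" node participate in both propositions once they are merged into a network, as the next slides do.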
9. Evidence for Propositions
A cognitive science experiment (Kintsch and Glass).
Consider two different sentences, both with three "content words":
"The settler built the cabin by hand." (one 3-place relation)
"The crowded passengers squirmed uncomfortably." (three 1-place relations)
Subjects recalled the first sentence better.
This suggests it was simpler in the representation. (Cognitive science involves a fair bit of guessing!)
10. Associative Networks
Idea: put together the bits of the propositions that are similar.
[Diagram: the two propositions merged into one network, sharing the nodes likes, isa, mary, john, a, apple, gives.]
11. Associative Networks
Idea: put together the bits of the propositions that are similar.
Each node has some level of activation.
Activation spreads in parallel to connecting nodes.
Activation fades rapidly with time.
A node's total activation is divided among its links.
These rules make sure activation doesn't spread everywhere.
Nodes and links can have different capacities: important ones are activated very often, and so have higher capacity.
These ideas seem to match our intuition from introspection: one thought links to another connected one.
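The spreading-activation rules above can be sketched as a small simulation. The node names come from the earlier proposition network; the link table, decay and spread constants are illustrative assumptions, not values from the slides:

```python
# Spreading activation over an associative network: activation spreads in
# parallel to neighbours, a node's outgoing activation is divided among
# its links, and all activation decays over time. Constants are illustrative.

DECAY = 0.5    # fraction of activation a node retains each step (fades with time)
SPREAD = 0.8   # fraction of a node's activation passed on to its neighbours

# Undirected links from the merged proposition network (illustrative subset).
links = {
    "john":  ["likes", "gives"],
    "mary":  ["likes", "gives"],
    "apple": ["gives", "isa"],
    "likes": ["john", "mary"],
    "gives": ["john", "mary", "apple"],
    "isa":   ["apple"],
}

def step(activation):
    """One parallel update: decay every node, then spread shares along links."""
    new = {node: a * DECAY for node, a in activation.items()}
    for node, a in activation.items():
        share = a * SPREAD / len(links[node])   # total divided among its links
        for neighbour in links[node]:
            new[neighbour] += share
    return new

act = {node: 0.0 for node in links}
act["apple"] = 1.0          # thinking of "apple"
for _ in range(3):
    act = step(act)
# Nodes near "apple" ("gives", "isa") end up more active than distant ones
# ("likes"), matching the idea that activation weakens as it spreads.
```

The division of activation among links is what keeps activation local, which is the property the McKoon and Ratcliff experiment on the next slide is probing.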
12. Associative Networks
Cognitive science experiment (McKoon and Ratcliff):
Made short paragraphs of connected propositions.
Subjects viewed 2 paragraphs for a short time.
Subjects were then shown 36 test words in sequence and asked if each word occurred in one of the stories.
Some of the 36 words were preceded by a word from the same story; some were preceded by a word from the other story.
A word from the same story helped them remember…
…suggesting the words were linked in a network.
They also showed recall was better if the words were closer in the network…
…suggesting activation weakens as it spreads.
13. Schemas
Propositional networks can represent specific knowledge: "John gave the apple to Mary"…
…but what about general knowledge, or common sense?
An apple is an edible fruit. It grows on a tree. It has a roundish shape, often red when ripe…
We could augment our proposition network: add more propositions to the node for apple.
Apple then becomes a concept, and the connections to apple are a schema for the concept.
What about more advanced concepts/schemas, like a trip to a restaurant?…
14. Scripts
Elements of a script:
Identifying name or theme, e.g. eating in a restaurant, visiting the doctor.
Typical roles: customer, waiter, cook.
Entry conditions: customer is hungry and has money.
15. Scripts
Sequence of goal-directed scenes: enter, get a table, order, eat, pay bill, leave.
Sequence of actions within a scene (e.g. order): get menu, read menu, decide order, give order to waiter.
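The script elements from these two slides (theme, roles, entry conditions, scenes, within-scene actions) could be held together as one nested structure. The names come from the slides; the encoding is an illustrative assumption:

```python
# One way to represent the "eating in a restaurant" script as a single unit:
# a nested structure holding theme, roles, entry conditions, and an ordered
# sequence of goal-directed scenes, each with its own action sequence.

restaurant_script = {
    "theme": "eating in a restaurant",
    "roles": ["customer", "waiter", "cook"],
    "entry_conditions": ["customer hungry", "customer has money"],
    "scenes": [
        {"name": "enter",       "actions": []},
        {"name": "get a table", "actions": []},
        {"name": "order",       "actions": ["get menu", "read menu",
                                            "decide order",
                                            "give order to waiter"]},
        {"name": "eat",         "actions": []},
        {"name": "pay bill",    "actions": []},
        {"name": "leave",       "actions": []},
    ],
}

def scene_names(script):
    """The ordered sequence of goal-directed scenes."""
    return [scene["name"] for scene in script["scenes"]]

print(scene_names(restaurant_script))
# ['enter', 'get a table', 'order', 'eat', 'pay bill', 'leave']
```

Keeping everything in one structure reflects the suggestion on the next slide that a whole script may be activated as a unit, rather than as loose propositions.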
16. Scripts
How to represent a script? We could use a proposition network for all the parts… but maybe the whole script should be a unit.
Introspection suggests that it is activated as a unit, without interference from associated propositions.
Experimental evidence (Bower, Black, Turner 1979):
Subjects read a short story. The story followed a script, but didn't fill in all the details.
They were then presented with various sentences, some from the story and some not.
Some trick sentences were included: not from the story, but part of the script.
Subjects were asked to rate each sentence from 1 (sure I didn't read it) to 7 (sure I did read it).
Subjects had a tendency to think they had read the trick sentences.
This suggests that they activate the script and fill in the blanks in memory.
17. …Starting to get a Model of the Mind
Propositional-schema representations are stored in long-term memory.
Associative activation is used to retrieve relevant memories…
…but many details are unspecified. We need more machinery to:
assess retrieved information and see whether it relates to current goals;
decompose goals into subgoals;
draw conclusions, make decisions, solve problems.
More importantly: how do new propositions and schemas get into memory? Schemas are often generalised from examples, not taught.
And what about working memory?
18. Working Memory
Most of long-term memory is not "active" most of the time; we just keep a few things in working memory for current processing.
It is very limited: try multiplying 3-digit numbers without paper. Working memory holds 3-4 chunks at a time.
Why so limited? (It seems it would be useful to have more nowadays.)
Maybe complex circuitry is required.
Maybe it is costly in energy.
Maybe tasks were less complex in the environment of early humans.
Or maybe more working memory would cause too many clashes, or be too hard to manage.
However, the limits can be overcome by skill formation.
Note also: the limit of 3-4 chunks does not mean other "propositions" are inactive; there could be a lot more going on subconsciously.
19. Skill Acquisition
With a lot of practice we can "automate" many tasks.
We distinguish this from "controlled processing", which uses working memory.
Once automated, a task takes little attention or working memory (these are "freed up"), and it is hard not to perform the task; we cannot control it well.
Most advanced skills use a combination: automatic processes under the direction of controlled processes, to meet goals.
Examples: a martial arts expert, or a musician.
20. Is Skill Acquisition Separate?
Evidence from neuropsychology: people with severe "anterograde amnesia" cannot learn new facts, i.e. can't get them into long-term propositional memory…
…but they can learn new skills.
Example: they can learn to solve the Towers of Hanoi with practice, but cannot remember any occasion on which they practised it.
This suggests that a different part of the brain handles each.
Skill may reside in the visual and motor systems, rather than the central systems.
Maybe because of evolution: animals often have good skill acquisition, and maybe humans evolved a specific new module for high-level functions.
21. Mental Images
Sometimes we seem to evoke visual images in the "mind's eye".
Subjective experience suggests the visual image is separate from propositions… but we need experimental evidence.
In imagining a scene (example: search a box of blocks for a 3 cm cube with two adjacent blue sides):
Properties are added to a description.
But not as many properties as would be present in a real visual scene (support, illumination, shading, shadows on near surfaces).
The image does not include properties not available to visual perception, e.g. the other side of the cube.
Intuition suggests that the "mind's eye" mimics visual perception. Maybe it uses the same hardware?
That would mean the "central system" sends information to the vision system.
22. Mental Images
Hypothesis: there is a human "visual buffer".
A short-term memory structure, used in both visual perception and the "mind's eye".
Special features/procedures: it can be loaded, refreshed, and transformed; it has a centre with high resolution; the focus of attention can be moved around.
Assuming it exists… what good is it?
It allows you to pull things out of your visual long-term memory and use them to build a scene, with all spatial details filled in.
Useful to plan a route, or a rearrangement of objects.
Experiment: how many edges does a cube have? (Assuming the answer is not in long-term memory.)
23. Experiments to show Mental Images
Test a special procedure: mental rotation.
24. Experiments to show Mental Images
The time taken depended on how much rotation was needed.
This suggests that we really rotate the image in the "visual buffer".
26. Experiments to show Mental Images
However… just because we rotate stuff doesn't necessarily mean that we do it in the "visual buffer". We need more evidence.
PET brain scans have shown that the occipital cortex is used, and the occipital cortex is known to be involved in visual processing.
27. So far…
The "symbolic" approach to explaining cognition.
Next, an alternative: the "connectionist" approach…
28. Connectionist Approach
What is connectionism?
Concepts are not stored as clean "propositions"; they are spread throughout a large network.
"Apple" activates thousands of microfeatures.
Activation of "apple" depends on context; there is no single dedicated unit.
Neural plausibility: graceful degradation, unlike logical representations.
Cognitive plausibility:
Could explain the entire system, rather than some task in the central system (symbolic accounts can be quite fragmented).
Could explain the "pattern matching" that seems to happen everywhere (for example in retrieval of memories).
Explains how human concepts/categories do not have clear-cut definitions: certain attributes increase likelihood (an ANN handles this well), but there are no hard-and-fast rules.
Explains how concepts are learned: adjust weights with experience.
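The last point, adjusting weights with experience, can be sketched with a single trainable unit that learns a graded "apple-ness" score from attributes. The features, examples, learning rate, and delta-rule update are all illustrative assumptions, far simpler than a real connectionist network:

```python
# Delta-rule weight adjustment: attributes like "round" or "grows on tree"
# come to increase the likelihood of "apple" without acting as hard-and-fast
# rules. Features, examples, and constants are illustrative.

FEATURES = ["round", "red", "grows on tree", "has fur"]
RATE = 0.2

def score(weights, x):
    """Graded evidence for 'apple': weighted sum of active features."""
    return sum(w * xi for w, xi in zip(weights, x))

def train(examples, epochs=100):
    """Adjust weights with experience, reducing the error on each example."""
    weights = [0.0] * len(FEATURES)
    for _ in range(epochs):
        for x, target in examples:       # target: 1 = apple, 0 = not apple
            error = target - score(weights, x)
            weights = [w + RATE * error * xi for w, xi in zip(weights, x)]
    return weights

examples = [
    ([1, 1, 1, 0], 1),   # round, red, grows on tree: an apple
    ([1, 0, 1, 0], 1),   # a green apple is still an apple
    ([1, 1, 0, 1], 0),   # round, reddish, furry: not an apple
    ([0, 0, 0, 1], 0),
]

weights = train(examples)
# After training, the unit scores apple-like feature bundles high and
# non-apples low; no single feature acts as a hard-and-fast rule.
```

Note how the second example (a green apple) forces the weights to treat "red" as helpful evidence rather than a necessary condition, which is the graded-category behaviour the slide describes.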
29. Another Perspective on Cognitive Science / AI
We have seen multiple models of the mind, and each has an "AI version" too:
Propositions: AI's logic statements.
Scripts: AI's case-based reasoning.
Mental images: AI has some work here, but not much.
Connectionist models: AI's neural networks.
This gives us another perspective on cognitive science / AI: the two are working in different directions.
The AI person starts with a computer and says: how can I make this do something that a mind does? They may take some inspiration from what a mind does and how it does it.
The cognitive science person starts with a mind and says: how can I explain something this does, using the "computer metaphor"? They may take some inspiration from how computers can do it, especially from how AI people have shown certain things can be done.
30. Another Perspective on Cognitive Science / AI (continued; this slide repeats slide 29 and adds the following)
Which model is correct? …possibly all of them, i.e. all working together.
e.g. we have seen that logic could be implemented on top of neurons (it need not be in a "clean" symbolic way).
This would give the opportunity for logical reasoning, while still having "scruffy" intuitions going on in the background.