An Artificial Neural Network for Multi-Level Interleaved and Creative Serial Order Cognitive Behavior. Steve Donaldson, Department of Mathematics and Computer Science, Samford University, Birmingham, Alabama.



Some Research Concerns Related to the Exploration of Intelligent Systems (adapted from Donaldson, 1999)

Research Concern | Example
Variable binding | Smolensky, 1990
Central executive function | Baddeley, 1992
Similarity matching | Sloman & Rips, 1998
Emotional impact on decisions | Damasio, 1994
Case based reasoning | Kolodner, 1997
Chunking | Laird, Newell, & Rosenbloom, 1987
Strategy development | Anumolu, Bray, & Reilly, 1997
Goal management and planning | Albus, 1991
Analogy development | Hofstadter, 1995
Temporal processing | Rosenblatt, 1964
Common sense reasoning | Sun, 1994
Mathematical reasoning | Anderson, 1995
Language | Gupta & Dell, 1999
Credit assignment | Holland, 1995
Rule processing | Goebel, 1991
Creativity | Hofstadter, 1995

Basic Requirements for Autonomous Systems
- Solve multiple tasks within the framework of a composite, synergistic architecture
- Act autonomously under the internal control of neural network type processes
- Learn in a biologically realistic manner
- Operate at a scale significantly larger than normally found in single purpose networks
- Acquire knowledge in a manner consistent with biological constraints
- Transfer information across tasks, thus dealing with new situations using previously acquired knowledge
- Exhibit multiple memory modalities typical of human information processing
- Perform lifetime plastic learning without catastrophic loss of previously acquired knowledge
- Learn from internal as well as external stimuli

Some Cognitive Skills and Behaviors Exhibited by Humans
- Recognition
- Alphabet mastery
- Spelling
- Counting
- Acquisition of math facts
- Memorization of a script
- Basic motor skills
- Associative memory
- Rehearsal
- Multiple associations
- Free association
- Transcription
- Solving mathematical expressions
- Memory theatres
- Understanding simple pronoun referents
- Complex motion
- Proto-language reading comprehension
- Route following
- General inductive reasoning
- Multiple trains of thought
- Acquisition and deployment of external memory strategies
- Sophisticated non-stereotypical sequence processing

Suggested Comprehensive Explanatory Mechanisms
- Sequence creation via generalized variable binding
- Predictive learning
- Interleaved processing

Categorizing Cognitive Abilities by Required Mental Features

Predictive Learning: Alphabet mastery; Spelling; Acquisition of math facts; Memorization of a script; Basic motor skills; Associative memory; Multiple associations

Interleaved Processing: Free association; Transcription; Route following; Memory theatres; Multiple trains of thought; Complex motion; Rehearsal

Recognition

Sequence Creation: Counting; Solving mathematical expressions; Understanding simple pronoun referents; Protolanguage reading comprehension; General inductive reasoning; Acquisition and deployment of external memory strategies; Sophisticated non-stereotypical sequence processing

High-Level Schematic of the Major Sub-Systems

Detailed Model Schematic

Model Computations

General activation modification equation:
  A_i(t+1) = I_i(t) + E_i(t+1) − Φ(t) − Ψ(t)

Internal activation:
  I_i(t) = A_i(t) + Σ_{j=1..M} Σ_{k=1..N} F_jk(t) W_jk,i(t)   (additive)
  I_i(t) = A_i(t) · Π_{j=1..M} Π_{k=1..N} F_jk(t) W_jk,i(t)   (multiplicative)
  where F_jk(t) = |F_jk(t)| if absolute value is to be used; F_jk(t) otherwise

Activation reduction:
  Φ(t) = (A_i(t) − λ)γ   if γ > 0
  Φ(t) = A_i(t) + γ      if γ < 0 and A_i(t) ≥ 0
  Φ(t) = A_i(t) − γ      if γ < 0 and A_i(t) < 0

Activation reset:
  Ψ(t) = (A_i(t) − λ)μ

Check bounds:
  A_i(t) = α_min if A_i(t) < α_min; α_max if A_i(t) > α_max; A_i(t) otherwise

Firing Value Computation

Subsequent calculations are performed only if the firing probability φ ≥ a random number in the range [0, 1]:
  Z_i(t) = υ if υ ≠ 0; A_i(t) otherwise
  Threshold: F_i(t) = Z_i(t) if A_i(t) ≥ τ; 0 otherwise
  Bounded:   F_i(t) = Z_i(t) if κ ≥ A_i(t) ≥ τ; 0 otherwise
  Symmetric: F_i(t) = Z_i(t) if A_i(t) ≥ τ; −Z_i(t) if A_i(t) ≤ −τ; 0 otherwise

Winner-take-all (tie goes to first neuron found):
  F_i(t) = F_i(t) if F_i(t) > every other firing value; 0 otherwise
  A_i(t) = A_i(t) if F_i(t) > every other firing value; λ otherwise
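As a concrete illustration, the additive activation update and the threshold firing rule above can be sketched in Python. The parameter symbols (γ, λ, μ, τ, υ, φ, α) follow the slide, but all function names and default values here are illustrative assumptions, not the model's actual settings.

```python
import random

def update_activation(a, ext, fires, weights, *, gamma=0.1, lam=0.0, mu=0.0,
                      a_min=-1.0, a_max=1.0):
    """One additive update: A(t+1) = I(t) + E(t+1) - Phi(t) - Psi(t), bounded."""
    internal = a + sum(f * w for f, w in zip(fires, weights))   # I_i(t), additive form
    if gamma > 0:
        phi = (a - lam) * gamma                                  # activation reduction
    else:
        phi = a + gamma if a >= 0 else a - gamma
    psi = (a - lam) * mu                                         # activation reset
    a_new = internal + ext - phi - psi
    return min(max(a_new, a_min), a_max)                         # check bounds

def firing_value(a, *, tau=0.5, upsilon=0.0, fire_prob=1.0, rng=random.random):
    """Threshold firing: emit Z_i(t) when A_i(t) >= tau, with probability fire_prob."""
    if rng() > fire_prob:
        return 0.0
    z = upsilon if upsilon != 0 else a
    return z if a >= tau else 0.0
```

With the defaults above, an input firing of 1.0 through a weight of 2.0 drives a resting unit to the upper activation bound, and only activations at or above τ produce a nonzero firing value.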

Model Computations (continued)

General weight modification equations

Weight decay:
  X_jk,i(t) = W_jk,i(t−1) − ν W_jk,i(t−1)
  If W_jk,i(t−1) ≥ 0: W_jk,i(t) = σ if W_jk,i(t−1) ≥ σ and X_jk,i(t) < σ; X_jk,i(t) otherwise
  If W_jk,i(t−1) < 0: W_jk,i(t) = −σ if W_jk,i(t−1) ≤ −σ and X_jk,i(t) > −σ; X_jk,i(t) otherwise

Connection training:
  T_jm(t) = η by default; Σ_{k=1..N} F_hk(t−1) if a training control module h is specified
  V_i(t) = F_i(t) if actual firing values are used; 1 otherwise
  V_jk(t) = F_jk(t) if actual firing values are used; 1 otherwise
  W_jk,i(t) = W_jk,i(t−1) + T_jm(t) V_jk(t−1) V_i(t)

Boundary check:
  W_jk,i(t) = β_min if W_jk,i(t) < β_min; β_max if W_jk,i(t) > β_max; W_jk,i(t) otherwise
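The decay, training, and boundary-check steps can likewise be sketched. Again, the symbols (ν, σ, η, β) follow the slide while the function names and numeric defaults are assumptions for illustration only.

```python
def decay_weight(w, *, nu=0.01, sigma=0.0):
    """Decay a weight toward zero, stopping at the decay floor sigma (or -sigma)."""
    x = w - nu * w
    if w >= 0:
        return sigma if (w >= sigma and x < sigma) else x
    return -sigma if (w <= -sigma and x > -sigma) else x

def train_weight(w, pre_fire, post_fire, *, eta=0.05,
                 beta_min=-1.0, beta_max=1.0, use_firing_values=True):
    """Hebbian-style update W(t) = W(t-1) + T * V_jk(t-1) * V_i(t), then bound."""
    v_pre = pre_fire if use_firing_values else 1.0
    v_post = post_fire if use_firing_values else 1.0
    w_new = w + eta * v_pre * v_post          # T_jm(t) = eta (no control module)
    return min(max(w_new, beta_min), beta_max)
```

Correlated pre- and post-synaptic firing strengthens a connection up to β_max, while the decay rule gradually erodes unused weights but never drives them past the floor σ.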

Module Parameters

Module Parameters (continued)

Connection Parameters

Connection Parameters (continued)

Some Temporal Processing Concepts

Pattern – a vector of values representing an idea or action in the model's experience, typically treated as a 2D figure to aid in visualization and conceptualization.

Sequence – a temporally ordered collection of input/output patterns.

Recognition – the competence of a system to identify previously learned features or concepts with minimal ambiguity, possibly from partial sensory input, and in the absence of any singular temporal contextual reference; specifically, the retrieval of a previously stored version of a pattern from long-term recognition memory.

Predictive learning – an ability, acquired by previous exposure to a sequence, to reproduce patterns in that sequence based on the current state of a context module and the current input.

Interleaved processing – the production and use of temporally ordered information based on sequence hierarchies (e.g., sequence A is composed of sequences B and C, sequence B is composed of sequences C, D, and E, etc.).

Sequence creation – production of a new sequence from an existing seed sequence and associations related to its members.
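A toy sketch of "predictive learning" as defined above: after exposure to a sequence, each pattern is reproduced from the current context state plus the current input. This dictionary-based illustration conveys the concept only; it is not the network's actual mechanism, and the names are hypothetical.

```python
def learn_sequence(memory, seq):
    """Store (context, current) -> next transitions for a pattern sequence."""
    context = None
    for cur, nxt in zip(seq, seq[1:]):
        memory[(context, cur)] = nxt
        context = cur                 # the context module tracks the prior pattern
    return memory

def replay(memory, start, steps):
    """Reproduce a learned sequence from its first pattern."""
    out, context, cur = [start], None, start
    for _ in range(steps):
        nxt = memory.get((context, cur))
        if nxt is None:
            break
        out.append(nxt)
        context, cur = cur, nxt
    return out
```

Because predictions are conditioned on context as well as input, two sequences that share a pattern (e.g., "B" below) replay without interfering with one another.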

Sample Pattern Representations Internal representation for the letter “A” Internal representation for a “boat”

Recognition in the Long-Term Memory Sub-System

Predictive Learning
Context state (S_i) and input/output (I_i) changes in a predictive learning system. Rosenblatt (1964); Elman (1990).

Acquisition of Math Facts Pattern set for restricted math fact learning Some basic math facts considered as temporal sequences Math fact learning represented as sequence completion
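Treating math facts as temporal sequences, as the slide suggests, means the prefix "3+4=" is a sequence whose completion is "7". A hypothetical prefix-table sketch of that idea (not the model's actual representation):

```python
def learn_fact(memory, fact):
    """Store every prefix of a fact string so each prefix predicts the next symbol."""
    for i in range(len(fact) - 1):
        memory[fact[:i + 1]] = fact[i + 1]
    return memory

def complete(memory, prefix):
    """Extend a prefix one predicted symbol at a time until no prediction remains."""
    out = prefix
    while out in memory:
        out += memory[out]
    return out
```

Presenting the partial sequence "3+4=" then yields the full fact, i.e., math-fact recall reduces to sequence completion.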

Script Learning as a Form of Prediction

S1: A_PENNY_SAVED_IS_A_PENNY_EARNED█

S2: WE_THE_PEOPLE_OF_THE_UNITED_STATES,_IN_ORDER_TO_FORM_A_MORE_PERFECT_UNION,_ESTABLISH_JUSTICE,_INSURE_DOMESTIC_TRANQUILITY,_PROVIDE_FOR_THE_COMMON_DEFENSE,_PROMOTE_THE_GENERAL_WELFARE,_AND_SECURE_THE_BLESSINGS_OF_LIBERTY_TO_OURSELVES_AND_OUR_POSTERITY,_DO_ORDAIN_AND_ESTABLISH_THIS_CONSTITUTION_FOR_THE_UNITED_STATES_OF_AMERICA.█

Avoiding catastrophic interference via sparse neural firing in the sequence context.
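The slide attributes interference avoidance to sparse firing in the sequence context: each script receives a nearly non-overlapping sparse context code, so the many letters the two scripts share do not overwrite one another's transitions. A hypothetical illustration using random k-of-n binary codes (the real module's coding scheme may differ):

```python
import random

def sparse_code(rng, n=200, k=5):
    """A sparse binary context vector: only k of n units fire."""
    return frozenset(rng.sample(range(n), k))

def overlap(a, b):
    """Number of units active in both codes."""
    return len(a & b)
```

With n = 200 and k = 5, two independently drawn codes share almost no active units, so transitions keyed by (context code, input pattern) for one script rarely collide with those of another.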

Basic Motor Skills

Muscle | Arm Segment | Movement | Movement Code
1 | Upper | Clockwise | M1
2 | Upper | Counter-clockwise | M2
3 | Lower | Clockwise | M3
4 | Lower | Counter-clockwise | M4

Muscle control patterns (M1–M8) for a simple arm.

Two Simple Movement Sequences A “reaching” sequence A “putting” sequence

Associative Memory via Predictive Learning Some learned associations Associative Recall

Multiple Associations Based on Probabilistic Firing in the Sequence Context Module
Two sets of learned multiple associations. Recall results from several multiple association tests when probing with [mts]_ _ and [water]_ _.

Short-Term Priority Memory Stylized view of short-term priority module activation gradient changes over time in the process of generating the strokes in the letters of the sequence CAT.
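One common way to realize such an activation gradient is competitive queuing: earlier items start with higher activation, the most active item is produced and then suppressed, and serial order falls out of the gradient. The sketch below is an illustration under that assumption, not the module's actual implementation; names and the decay value are hypothetical.

```python
def serial_recall(items, decay=0.1):
    """Recall distinct items in order from a primacy gradient of activations."""
    # Earlier items receive higher initial activation (the priority gradient).
    acts = {item: 1.0 - i * decay for i, item in enumerate(items)}
    out = []
    while acts:
        best = max(acts, key=acts.get)  # most active item is produced next
        out.append(best)
        del acts[best]                  # suppression after production
    return out
```

Producing the strokes of C, A, T in order then amounts to repeatedly emitting and suppressing the currently most active element of the gradient.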


Free Association A trace of the pattern perception module An associative tale A trace of the collective microfeatures module

Multiple Trains of Thought Learned sequences “Thinking” several thoughts The effect of parameter adjustment on recall order

A Route Following Experiment

From | To | Highway
Dumas, Texas (DU TX) | Raton, New Mexico (RAT NM) | US64
Glenwood Springs, Colorado (GS CO) | Aspen, Colorado (ASP CO) | CO82
Birmingham, AL (BIR AL) | Memphis, Tennessee (ME TN) | US78
Raton, New Mexico (RAT NM) | Denver, Colorado (DEN CO) | I25
Amarillo, Texas (AM TX) | Dumas, Texas (DU TX) | US87
Memphis, Tennessee (ME TN) | Amarillo, Texas (AM TX) | I40
Denver, Colorado (DEN CO) | Glenwood Springs, Colorado (GS CO) | I70

Localized route sub-sequences lacking global order.

Route Following Via Interleaved Processing Correctly ordered route recall after learning randomly ordered components
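The interleaving at work here can be illustrated by chaining the independently learned (From, To, Highway) legs: each leg's destination cues the next leg, so a globally ordered route emerges from sub-sequences learned in random order. A minimal sketch with illustrative names, assuming the legs form an acyclic chain:

```python
def order_route(legs, start):
    """legs maps a From-city to its (To-city, highway); follow the chain from start."""
    route, city = [], start
    while city in legs:                  # stop when a city has no outgoing leg
        nxt, hwy = legs[city]
        route.append((city, nxt, hwy))
        city = nxt
    return route
```

Applied to the experiment's table, starting from Birmingham recovers the full seven-leg route ending in Aspen, even though the legs were presented without global order.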

Learning for a Transcription Experiment An interleaved processing hierarchy Patterns Sequences

Transcribing a “Thought”

Complex Motion Muscle control output for a complex motion

Memory Theatres Conceptual approaches to temporal knowledge representation for memory theatres

Story Telling Using Memory Theatres

Several Approaches to “Rehearsal” E934█E934█9██3██4███9██3███4█████████9██████3█████4███████E934█9██3██4██████9███ █3████4███████████9███████3███████4█████████████████9████████████3████████████4█ ████████████████████████████████████████████████████████████████████████████████ ████████████████████████████████████ Π #Π█Π #Π█3██.██1██4██1██5███9███#███Π #Π█3██.██1██4██1██ 5███9███#███Π #Π█3██.██1██4██1██5███9███#███Π #Π█3██.██1██4██1█ █5███9███#███Π #Π█3██.██1██4██1██5███9███#██ ΠΠ #█ΠΠ #█ΠΠ #█ΠΠ #█ΠΠ #█ΠΠ #█ΠΠ #█ΠΠ #█ΠΠ #█ΠΠ #█ΠΠ #█ΠΠ #█ΠΠ # █ΠΠ #█ΠΠ #█ΠΠ #█ΠΠ #█ΠΠ #█ An indication of how “rehearsal” results can depend on sequence format The results of another approach to “rehearsal” One approach to sequence “repetition” via interleaved processing Pattern set for “rehearsal” simulations

Sequence Creation

Previously learned sequences:
  P11 P12 P13 … P1M
  P21 P22 P23 … P2N
  P31 P32 P33 … P3R
  P41 P42 P43 … P4S

Seed sequence: P1 P2 P3 P4 …
Created sequence: P1M P2N P3R P4S …
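The diagram can be read as follows: each member of the seed sequence retrieves its previously learned sequence and contributes that sequence's final member to the created sequence. A minimal sketch of that mapping, with hypothetical names:

```python
def create_sequence(seed, learned):
    """For each seed pattern P_n, emit the last member of its learned sequence."""
    return [learned[p][-1] for p in seed]
```

So a seed of P1 P2 P3 P4 yields the new sequence P1M P2N P3R P4S, a sequence never experienced as a whole.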

Solving Mathematical Expressions Additional sequence learning requirements A trace of patterns produced during the solution of a mathematical expression

Protolanguage Reading Comprehension

Assimilating letters into words and concepts. Patterns required for a reading experiment. Previously learned sequences necessary for reading.

Donaldson, Steve (2003a). An artificial neural network model for reading comprehension. In Arabnia, H., Joshua, R., & Mun, Y. (Eds.), Proceedings of the International Conference on Artificial Intelligence, Volume 1. Las Vegas, NV: CSREA Press.

General Inductive Reasoning Patterns used in an inductive reasoning experiment Sequence learning foundation for inductive reasoning Observations preceding inductive rule formation

Sample Details from an Inductive Rule Creation Process Trial 1 Trial 2 Trial 5 Trial 12

Inductive Rule Formation and Application An inductive rule formed via sequence creation Additional sequence learning for inductive rule application Application of a rule learned via inductive reasoning

External Memory Strategies Targets Objects Destination Relations Strategy Relations Control Patterns Object-Target Categorization

Observations preceding formation of a memory strategy Sequence Learning for an External Memory Strategies Experiment

Trial 1 Trial 8 Trial 10 Learning by example as a foundation for the creation of external memory strategies

External memory strategies learned by example Some additional facts to be learned before strategy application Recall and application of an external memory strategy Applying a Learned External Memory Strategy

A Non-Stereotypical Sequence Processing Experiment in the Domain of Music

Key designations for the three octaves mapped below: Lo C, Lo D, Lo E, Lo F, Lo G, Lo A, Lo B; Mid C, Mid D, Mid E, Mid F, Mid G, Mid A, Mid B; Hi C, Hi D, Hi E, Hi F, Hi G, Hi A, Hi B.

Note-to-keyboard-position transformation maps and a phrase from a song. Model expansion to accommodate embedded sequences.

Donaldson, Steve (2003b). A neural network for high-level cognitive control of serial order behavior. In Ventura, D. & Das, S. (Eds.), Proceedings of the 7th Joint Conference on Information Sciences (6th International Conference on Computational Intelligence and Natural Computing). Research Triangle Park, NC: Association for Intelligent Machinery.

Non-Stereotypical Sequence Processing
"Playing" a song at a designated octave as a form of NSTSP. Flowchart of NSTSP processing in the domain of music. (Donaldson, 2003b.)

Counting Patterns for a counting experiment Sequences learned as a foundation for counting Representing item abstraction for a counting task Results of counting the members of a group of people

Understanding Simple Pronoun Referents Simple pronoun to antecedent conversion

Results
- Explore low level cognitive mechanisms
- Maintain close ties to biological systems
- Seek generic principles subserving intelligence
- Evaluate a parsimonious approach to systems design
- Investigate foundations for high-level cognition
- Explore interaction of multiple memory modalities
- Demonstrate sufficiency of the proposed foundation

The End!

Alternate Approaches to Modeling External Memory Strategies. From Anumolu, Bray, and Reilly (1997).

Considerations for the Formation of Bidirectional Sequences From Yarbus (1967) in Kandel, Schwartz, & Jessell (1995)