Modeling Situated Language Learning in Early Childhood via Hypernetworks

Byoung-Tak Zhang (1,2), Eun Seok Lee (2), Min-Oh Heo (1), and Myounggu Kang (1)
(1) School of Computer Science & Engineering, Seoul National University
(2) Interdisciplinary Program in Cognitive Science, Seoul National University
Biointelligence Lab, Seoul National University, Seoul, Korea

Background
- Human language is grounded, i.e. embodied and situated in the environment (Barsalou, 2008; Zwaan & Kaschak, 2008).
- Grounded language models rely on multimodal sensory data (Knoeferle & Crocker, 2006; Spivey, 2007; Yu et al., 2008).
- Language grounding requires a flexible modeling tool that can deal with high-dimensional data.

Research Questions
- What kind of grounded linguistic "concept map" is cognitively and computationally plausible, and friendly to child learning?
- How does the multimodal concept map of a young child evolve as the child is incrementally exposed to situated language environments?
- What kinds of cognitive learning algorithms can naturally simulate this situated language learning process?
- Can the situated concept-map building process be used to endow AI agents with language learning capability?

Key Ideas in This Work
- A hypernetwork structure (Zhang, 2008) as a multimodal concept map.
- Population coding as a representation that is friendly to incremental learning.
- Cartoon videos as a surrogate for situated child language corpora.

Method
1. Define linguistic words and visual words.
2. Learn a hypernetwork incrementally on the videos.
3. Plot the multimodal concept map for a given query.
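The poster gives no code, so the following is only a minimal Python sketch of steps 1-3 under the stated key ideas: hyperedges are small sets of linguistic and visual words sampled from each sentence-image pair, stored as duplicate copies (population coding), and a concept map for a query word is read off the stored hyperedges. The class name, the "v_" prefix for visual words, and all parameter values are illustrative assumptions, not the authors' implementation.

```python
import random
from collections import Counter

def tokenize(sentence):
    """Toy tokenizer: lowercase, strip punctuation, split on whitespace.
    Stands in for the linguistic-word definition of Method step 1."""
    return sentence.lower().replace(",", " ").replace(".", " ").split()

class Hypernetwork:
    """Population-coded hyperedge memory (hypothetical sketch, not the
    authors' code). Each hyperedge is a small set of linguistic and
    visual words sampled from one sentence-image pair."""

    def __init__(self, edge_order=3, samples_per_pair=50, seed=0):
        self.edge_order = edge_order            # words per hyperedge
        self.samples_per_pair = samples_per_pair
        self.edges = Counter()                  # hyperedge -> number of stored copies
        self.rng = random.Random(seed)

    def learn_pair(self, sentence, visual_words):
        """Method step 2: incremental update from one dialogue sentence
        and the visual words extracted from its image."""
        vocab = tokenize(sentence) + list(visual_words)
        if len(vocab) < self.edge_order:
            return
        for _ in range(self.samples_per_pair):
            edge = frozenset(self.rng.sample(vocab, self.edge_order))
            self.edges[edge] += 1               # duplicate copies encode association strength

    def concept_map(self, query, top_k=10):
        """Method step 3: the words most strongly associated with `query`
        across all stored hyperedges (a flat view of the concept map)."""
        assoc = Counter()
        for edge, count in self.edges.items():
            if query in edge:
                for word in edge - {query}:
                    assoc[word] += count
        return assoc.most_common(top_k)

# Illustrative usage with one Data 2 sentence; "v_engine" and "v_track"
# are made-up visual-word tokens.
net = Hypernetwork()
net.learn_pair("Thomas is a tank engine who lives on the island of Sodor",
               ["v_engine", "v_track"])
print(net.concept_map("thomas"))
```

Storing many duplicated hyperedge copies rather than explicit weights is what makes the memory naturally incremental: each new sentence-image pair simply adds more copies, and association strength is the copy count.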
Data Sets
- Data 1: Educational animation video scripts for children (text only); 33,910 sentences, 6,124 word types, 208,116 word tokens; divided into 10 learning stages according to the recommended language levels of difficulty.
- Data 2: Videos of Thomas & Friends, Season 1 (10 videos, 398 sentence-image pairs in total); each pair consists of a dialogue sentence and its corresponding image.
  - Season 1, 1st video (40 pairs), e.g.: "Thomas is a tank engine who lives on the island of Sodor. He has six small wheels, a short funnel, a short boiler and a short dome."
  - Season 1, 2nd video (40 pairs), e.g.: "Edward was in the shed with the engines. They were all bigger than him and boasted about it. 'The driver won't pick you,' said Gordon. 'He wants strong engines.'"

Results
- Example sentences generated from the learned concept map (Data 1): sentences excerpted from earlier learning stages through later ones show the evolving concept associations.
- Proportion (%) and standard deviation of semantically well-structured sentences among 100 randomly generated sentences, comparing developmental (staged) learning with improvised learning, as measured by human rating.
- Evolution of the multimodal concept map (MCM) for Data 2 [figure]: the MCM for 'Station' after learning the 1st video, the 2nd video, videos 1-4, and further videos.
- Mental images generated from the learned multimodal concept map (Data 2) [figure]: images generated for each query word ('Fat', 'People', 'Went', 'Station') and for the sentence 'Fat people went station'. (A minimal code sketch of this generation step follows the references.)

References
Barsalou, L. W. (2008). Grounded cognition. Annual Review of Psychology, 59.
Griffiths, T., Steyvers, M., & Tenenbaum, J. (2007). Topics in semantic representation. Psychological Review, 114(2), 211-244.
Knoeferle, P., & Crocker, M. W. (2006). The coordinated interplay of scene, utterance, and world knowledge: Evidence from eye-tracking. Cognitive Science, 30.
Spivey, M. J. (2007). The Continuity of Mind. Oxford University Press.
Yu, C., Smith, L. B., & Pereira, A. F. (2008). Grounding word learning in multimodal sensorimotor interaction. CogSci-2008.
Zhang, B.-T. (2008). Hypernetworks: A molecular evolutionary architecture for cognitive learning and memory. IEEE Computational Intelligence Magazine, 3(3).
Zwaan, R. A., & Kaschak, M. P. (2008). Language in the brain, body, and world. In Cambridge Handbook of Situated Cognition, Chapter 19.
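The mental-image and sentence generation reported in the Results can be read as queries against the learned hyperedge memory. The sketch below continues the hypothetical Hypernetwork sketch above (reusing its `net` object and concept_map method); the "v_" prefix for visual words and the greedy association chain are illustrative assumptions, not the authors' procedure.

```python
from collections import Counter

def tokenize(sentence):
    """Same toy tokenizer as in the learning sketch above."""
    return sentence.lower().replace(",", " ").replace(".", " ").split()

def generate_mental_image(net, sentence, top_k=5):
    """Collect the visual words most strongly co-activated by the words of
    `sentence` -- a stand-in for the poster's 'generated mental images'.
    Visual words are assumed to carry the made-up 'v_' prefix."""
    activation = Counter()
    for word in tokenize(sentence):
        for edge, count in net.edges.items():
            if word in edge:
                for w in edge:
                    if w.startswith("v_"):
                        activation[w] += count
    return [v for v, _ in activation.most_common(top_k)]

def generate_sentence(net, seed_word, length=5):
    """Greedy word-association chain over the concept map -- a crude
    stand-in for the poster's random-sentence generation task."""
    words = [seed_word]
    for _ in range(length - 1):
        candidates = [w for w, _ in net.concept_map(words[-1])
                      if not w.startswith("v_") and w not in words]
        if not candidates:
            break
        words.append(candidates[0])
    return " ".join(words)

# Continuing from the learning sketch above (where `net` was trained):
#   generate_mental_image(net, "Fat people went station")
#   generate_sentence(net, "station")
```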