
Self-Assembling Hypernetworks for Cognitive Learning of Linguistic Memory
Int. Conf. on Cognitive Science, CESSE-2008, Feb. 6-8, 2008, Sheraton Hotel, Cairo, Egypt
Byoung-Tak Zhang and Chan-Hoon Park
Biointelligence Laboratory, School of Computer Science and Engineering
Cognitive Science, Brain Science, and Bioinformatics Programs
Seoul National University, Seoul, Korea

Talk Outline
- A Language Game
- Learning the Linguistic Memory
  - The Hypernetwork Model of Language
- Sentence Recall Experiments
- Extension to Multimodal Memory Game (Language + Vision)
- Conclusion

The Language Game Platform
(© 2008, SNU Biointelligence Lab)

A Language Game
? still ? believe ? did this. → I still can't believe you did this.
We ? ? a lot ? gifts. → We don't have a lot of gifts.

Text Corpus: TV Drama Series
Friends, 24, House, Grey's Anatomy, Gilmore Girls, Sex and the City
- 289,468 sentences (training data)
- 700 sentences with blanks (test data)
Training examples: "I don't know what happened." / "Take a look at this." / ...
Test examples: "What ? ? ? here." / "? have ? visit the ? room." / ...

Step 1: Learning a Linguistic Memory
[Figure: training sentences are decomposed into hyperedges of order k = 2, 3, 4, ..., which together form the hypernetwork memory.]
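The decomposition in Step 1 can be sketched in a few lines of Python (an illustrative sketch, not the authors' code; `extract_hyperedges` is our own name): a window of order k slides over each sentence, and the resulting word tuples become the hyperedges of the memory, with repeated tuples gaining weight.

```python
from collections import Counter

def extract_hyperedges(sentence, k):
    """Slide a window of order k over the sentence and return the
    contiguous k-word tuples (the hyperedges)."""
    words = sentence.split()
    return [tuple(words[i:i + k]) for i in range(len(words) - k + 1)]

# Build a tiny memory with hyperedges of order k = 2, 3, 4, as on the slide.
memory = Counter()
for k in (2, 3, 4):
    for edge in extract_hyperedges("I still can't believe you did this", k):
        memory[edge] += 1  # duplicates across sentences would add weight

print(len(memory))  # 15 distinct hyperedges from this 7-word sentence
```

Across a full corpus, the weight of each hyperedge reflects how often its word combination occurs, which is what the recall step later exploits.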

Step 2: Recalling from the Memory
[Figure: a stored sentence such as "He is my best friend" is kept as overlapping word fragments over variables X1-X8 (storage); given a partial query such as "He is a ? friend", the matching fragments self-assemble to complete the sentence (self-assembly → recall).]
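The recall step can likewise be sketched (our own illustration of the idea, not the original implementation): every stored k-gram is aligned against the query at each offset that covers a blank, and the k-grams consistent with the known words vote on the missing word.

```python
from collections import Counter

def recall(query, memory, k):
    """Fill each '?' in the query by letting stored k-grams
    (hyperedges) that agree with the known words vote on the blank."""
    words = query.split()
    for pos, w in enumerate(words):
        if w != "?":
            continue
        votes = Counter()
        for edge in memory:  # edge: a stored k-word tuple
            # align the k-gram at every offset covering the blank
            for off in range(max(0, pos - k + 1), pos + 1):
                window = words[off:off + k]
                if len(window) < k:
                    continue
                if all(a == b or a == "?" for a, b in zip(window, edge)):
                    votes[edge[pos - off]] += 1
        if votes:
            words[pos] = votes.most_common(1)[0][0]
    return " ".join(words)

memory = {("He", "is", "my"), ("is", "my", "best"), ("my", "best", "friend")}
print(recall("He is ? best friend", memory, 3))  # He is my best friend
```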

[Figure: four labeled training sentences, each encoded as a 15-bit vector (x1-x15) with a class label y; in each of several rounds (Round 1, Round 2, Round 3), order-3 hyperedges such as (x1, x4, x10; y = 1) are sampled from these vectors and added to the memory.]

The Hypernetwork Memory [Zhang, DNA ]
[Figure: a hypergraph over the variables x1-x15, whose hyperedges are the stored fragments.]

Molecular Self-Assembly of Hypernetworks
[Figure: a hypernetwork representation (hyperedges over variables such as x_i, x_j and label y) is converted by molecular encoding into DNA strands, on which DNA computing performs the self-assembly.]

Experimental Setup
- The order (k) of a hyperedge
  - Range: 2~4
  - Fixed order for each experiment
- The method of creating hyperedges from training data
  - Sliding window method
  - Sequential sampling from the first word
- The number of blanks (question marks) in test data
  - Range: 1~4
  - Maximum: k - 1
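Building test questions as described above (a few blanks per sentence, at most k - 1) is simple to sketch; `mask_sentence` is our own illustrative helper, not part of the original setup.

```python
import random

def mask_sentence(sentence, n_blanks, seed=None):
    """Replace n_blanks randomly chosen words with '?' to build a
    test question like those in the 700-sentence test set."""
    rng = random.Random(seed)
    words = sentence.split()
    for i in rng.sample(range(len(words)), n_blanks):
        words[i] = "?"
    return " ".join(words)

q = mask_sentence("We don't have a lot of gifts", n_blanks=3, seed=0)
print(q)
```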

Learning Behavior Analysis (1/3)
The performance monotonically increases as the learning corpus grows. The low-order (k=2) memory performs best for the one-missing-word problem.

Learning Behavior Analysis (2/3)
The medium-order (k=3) memory performs best for the two-missing-words problem.

Learning Behavior Analysis (3/3)
The high-order (k=4) memory performs best for the three-missing-words problem.

The Language Game: Results
Why ? you ? come ? down ? → Why are you go come on down here
? think ? I ? met ? somewhere before → I think but I am met him somewhere before
? appreciate it if ? call her by ? ? → I appreciate it if you call her by the way
I'm standing ? the ? ? ? cafeteria → I'm standing in the one of the cafeteria
Would you ? to meet ? ? Tuesday ? → Would you nice to meet you in Tuesday and
? gonna ? upstairs ? ? a shower → I'm gonna go upstairs and take a shower
? have ? visit the ? room → I have to visit the ladies' room
We ? ? a lot ? gifts → We don't have a lot of gifts
? ? don't need your ? → If I don't need your help
? ? ? decision → to make a decision
? still ? believe ? did this → I still can't believe you did this
What ? ? ? here → What are you doing here
? you ? first ? of medical school → Are you go first day of medical school
? ? a dream about ? In ? → I had a dream about you in Copenhagen

Extension to Multimodal Memory Game

[Figure: the Multimodal Memory Game pairs images, sound, and text from a drama scene. In "Memorizing Images to Retrieve Texts" the player sees an image and recalls its dialogue; in "Memorizing Texts to Retrieve Images" the direction is reversed; a Hint panel shows the text. Example dialogue: "But, I'm getting married tomorrow. Well, maybe I am... I keep thinking about you. And I'm wondering if we made a mistake giving up so fast. Are you thinking about me? But if you are, call me tonight."]
How Can It Be Done?
[Figure: the Image Generation Game and the Text Generation Game, played by a machine learner.
- Training scenes pair text with image patches: Scene 1 "He is best friend" with patches a1-a4, Scene 2 "She is strong boy" with b1-b4, Scene 3 "friend likes pretty girl" with c1-c4.
- Text: N sequential samples; Image: N random samples.
- The multimodal hypernetwork stores mixed text-image hyperedges such as (He, is, a1, a3, a4) and (best, friend, a1, a3, a2).
- Generating an image: a text query ("He is best friend") is matched against the stored hyperedges; the image patches in the matching hyperedges are tallied by voting (e.g. a1 = 2, b2 = 1, a3 = 2, a4 = 2), the winners form a map (a1, b2, a3, a4), and the training image with the smallest Hamming distance to the map (here Scene 1) is returned.]
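The voting step of the image-generation game can be sketched as follows (an illustrative toy, not the authors' code; the patch names a1, b2, ... follow the slide's toy example, and `generate_image` is our own name). Text-matched multimodal hyperedges each carry a few image patches; the patches are tallied, the winners form a pixel map, and the closest training scene by Hamming distance is returned.

```python
from collections import Counter

def generate_image(matched_edges, scenes):
    """matched_edges: image-patch tuples taken from hyperedges whose
    text part matched the query. Vote for patches, then return the
    training scene closest to the voted map in Hamming distance."""
    votes = Counter(p for edge in matched_edges for p in edge)
    image_map = {p for p, _ in votes.most_common(4)}  # toy map: 4 patches
    def hamming(scene):
        return len(scene ^ image_map)  # symmetric difference as distance
    return min(scenes, key=lambda name: hamming(scenes[name]))

# Toy data mirroring the slide: votes come out a1=2, b2=1, a3=2, a4=2.
matched = [("a1", "a3", "a4"), ("a1", "a3"), ("a4", "b2")]
scenes = {"Scene1": {"a1", "a2", "a3", "a4"},
          "Scene2": {"b1", "b2", "b3", "b4"}}
print(generate_image(matched, scenes))  # Scene1
```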

Image-to-Text Crossmodal Recall
Learning by viewing: the user shows images whose dialogue comes from the text corpus:
- Where am I giving birth
- You guys really don't know anything
- So when you guys get in there
- I know it's been really hard for you
- ...
Question (image + partial text): Where ? I giving ?
Answer: Where am I giving birth

Text-to-Image Crossmodal Recall
Learning by viewing: the corpus pairs images with dialogue:
- Where am I giving birth
- You guys really don't know anything
- So when you guys get in there
- I know it's been really hard for you
- ...
Question (text): You've been there
Answer: the corresponding image

The Multimedia (Movie) Corpus
Dataset: 2 dramas (images and the corresponding scripts)
- Titles: Friends, Prison Break
- Training data: 2,808 images and scripts
- Image size: 80 x 60 = 4,800 pixels
- Vocabulary: 2,579 words
Example scripts: "Where am I giving birth" / "I know it's been really hard for you" / "So when you guys get in there"

Experimental Setup
- The order (k) of memory units
  - Text: k = 2, 3, 4
  - Image: k = 10, ..., 340
- Constructing hyperedges from training data
  - Text: sequential sampling from a random position
  - Image: random sampling from the 4,800 pixel positions
- The number of repeated samples from an image-text pair
  - N = 150, ..., 300
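Hyperedge construction for one image-text pair can be sketched like this (our own illustration of the sampling scheme above; sizes are scaled far down from the real 4,800-pixel images, and `sample_multimodal_edges` is a hypothetical helper name):

```python
import random

def sample_multimodal_edges(words, pixels, n, text_k, image_k, seed=0):
    """Draw n hyperedges from one image-text pair: a text part sampled
    sequentially from a random start position, plus image_k randomly
    chosen (position, value) pixel pairs."""
    rng = random.Random(seed)
    edges = []
    for _ in range(n):
        start = rng.randrange(len(words) - text_k + 1)
        text_part = tuple(words[start:start + text_k])
        positions = rng.sample(range(len(pixels)), image_k)
        image_part = tuple((p, pixels[p]) for p in positions)
        edges.append((text_part, image_part))
    return edges

words = "Where am I giving birth".split()
pixels = [0, 1, 1, 0, 1, 0, 0, 1]  # toy 8-pixel "image"
edges = sample_multimodal_edges(words, pixels, n=3, text_k=3, image_k=4)
print(len(edges))  # 3
```

Because each edge couples a text fragment with pixel evidence, the same memory supports both image-to-text and text-to-image recall.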

Image-to-Text Recall Examples
Query (image) → matching & completion over stored fragments → Answer:
- "I don't know what happened" (assembled from fragments such as "I don't know", "don't know what", "know what happened")
- "There's a kitty in my guitar case" (from "There's a", "a kitty in", ..., "in my guitar case")
- "Maybe there's something I can do to make sure I get pregnant" (from "Maybe there's something", "there's something I", ..., "I get pregnant")

Text-to-Image Recall Examples
Query (text) → matching & completion → Answer (retrieved image):
- I don't know what happened
- Take a look at this
- There's a kitty in my guitar case
- Maybe there's something I can do to make sure I get pregnant

Image to Text (Recall Rate)
Note: In the tolerant recall, the generated sentence is counted as correct if the number of word mismatches is within the specified tolerance level (here, two words).

Text to Image (Recall Rate)
Note: The retrieved image is counted as correct if, among all corpus images, it has the smallest Hamming distance to the target image.
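Both evaluation criteria in the notes above can be written down directly (a sketch; the helper names are ours):

```python
def tolerant_correct(generated, target, tolerance=2):
    """Image-to-text: correct if at most `tolerance` words differ
    between the generated and the target sentence."""
    g, t = generated.split(), target.split()
    mismatches = sum(a != b for a, b in zip(g, t)) + abs(len(g) - len(t))
    return mismatches <= tolerance

def image_correct(retrieved, target, corpus):
    """Text-to-image: correct if no corpus image is closer (in
    Hamming distance) to the target than the retrieved one."""
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    return hamming(retrieved, target) <= min(hamming(img, target) for img in corpus)

print(tolerant_correct("Where am I getting birth", "Where am I giving birth"))  # True
print(image_correct((0, 1, 1), (0, 1, 1), [(0, 0, 1), (1, 1, 1), (0, 1, 1)]))  # True
```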

Conclusion
- Hypernetworks are random graph models with higher-order edges, allowing a more natural representation for learning higher-order interactions.
- We introduce a linguistic memory model based on a self-organizing hypernetwork inspired by mental chemistry.
- The hypernetwork stores sentences as random fragments and recalls a sentence by self-assembling them, given a partial query sentence.
- Applied to a corpus of 290K sentences, we obtain a recall performance of %, depending on the difficulty of the task.
- Cognitive plausibility:
  - "Multiple representations of partially overlapping micromodules which are partially active simultaneously" [Fuster, 2003]
  - Neural microcircuits [Grillner et al., 2006]
  - Cognitive schema or cognitive code [Tse et al., 2007]

Acknowledgements
Data Acquisition and Experimentation
- Text: Ha-Young Jang
- Image: Min-Oh Heo, Sun Kim, Joo-Kyoung Kim, Ho-Sik Seok, Kwonil Kim, Sang-Yoon Lee
Supported by
- National Research Lab Program of Min. of Sci. & Tech. ( )
- Next Generation Tech. Program of Min. of Ind. & Comm. ( )
- BK21-IT Program of Min. of Education ( )
- SK Telecom ( )
More information at → Research → MMG (to be open soon)

The Hypernetwork Model of Learning [Zhang, 2006]

Deriving the Learning Rule

Derivation of the Learning Rule

Molecular Self-Assembly

Encoding a Hypernetwork with DNA
a) Collection of (labeled) hyperedges:
  z1: (x1=0, x2=1, x3=0, y=1)
  z2: (x1=0, x2=0, x3=1, x4=0, x5=0, y=0)
  z3: (x2=1, x4=1, y=1)
  z4: (x2=1, x3=0, x4=1, y=0)
b) Library of DNA molecules corresponding to (a), using the codewords x1=AAAA, x2=AATT, x3=AAGG, x4=CCTT, x5=CCAA, y=ATGC for variables and 0=CC, 1=GG for values:
  z1: AAAACCAATTGGAAGGCCATGCGG
  z2: AAAACCAATTCCAAGGGGCCTTCCCCAACCATGCCC
  z3: AATTGGCCTTGGATGCGG
  z4: AATTGGAAGGCCCCTTGGATGCCC
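The encoding above can be reproduced mechanically (a sketch using the slide's codeword table; `encode_hyperedge` is our own name): each variable contributes a 4-mer, each value a 2-mer, and the strand is their concatenation.

```python
# Codewords from the slide: a 4-mer per variable, a 2-mer per value.
VAR_CODE = {"x1": "AAAA", "x2": "AATT", "x3": "AAGG",
            "x4": "CCTT", "x5": "CCAA", "y": "ATGC"}
VAL_CODE = {0: "CC", 1: "GG"}

def encode_hyperedge(edge):
    """edge: ordered (variable, value) pairs, with the label y last."""
    return "".join(VAR_CODE[var] + VAL_CODE[val] for var, val in edge)

z1 = [("x1", 0), ("x2", 1), ("x3", 0), ("y", 1)]
z3 = [("x2", 1), ("x4", 1), ("y", 1)]
print(encode_hyperedge(z1))  # AAAACCAATTGGAAGGCCATGCGG
print(encode_hyperedge(z3))  # AATTGGCCTTGGATGCGG
```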

DNA Molecular Computing
[Figure: DNA computing primitives — self-assembly (repeated heat/cool cycles), polymer formation, self-replication, molecular recognition, and nanostructures.]