Brain, Mind, and Computation, Part I: Computational Brain. Brain-Mind-Behavior Seminar, May 18, 2011.


Presentation transcript:

Brain, Mind, and Computation, Part I: Computational Brain. Brain-Mind-Behavior Seminar, May 18, 2011. Byoung-Tak Zhang, Biointelligence Laboratory, Computer Science and Engineering; Brain Science, Cognitive Science, and Bioinformatics Programs; Brain-Mind-Behavior Concentration Program, Seoul National University.

(c) 2009 SNU Biointelligence Laboratory. Lecture Overview. Part I: Computational Brain - how does the brain encode and process information? Part II: Brain-Inspired Computation - how can we build intelligent machines inspired by brain processes? Part III: Cognitive Brain Networks - how do brain networks perform cognitive processing?

Human Brain: Functional Architecture. Brodmann's areas and their functions.

Cortex: Perception, Action, and Cognition. Fig 3-18: Primary sensory and motor cortex and association cortex.

Mind, Brain, Cell, Molecule. (Diagram: levels of organization from molecule to cell to brain to mind, linked by memory.)

Computational Neuroscience

The Structure of Neurons

Information Transmission between Neurons. Overview of signaling between neurons: synaptic inputs produce postsynaptic currents; passive depolarizing currents spread along the membrane; when depolarization reaches threshold, an action potential is triggered; the inward current is conducted down the axon, depolarizing adjacent regions of membrane and triggering another action potential there.

Voltage-gated channels in the neuronal membrane. Mechanisms of neurotransmitter receptor molecules.


Hodgkin-Huxley Model. The membrane equation is C dV/dt = -g_Na m^3 h (V - E_Na) - g_K n^4 (V - E_K) - g_L (V - E_L) + I(t), where C is the membrane capacitance and I(t) is the external current; the sodium, potassium, and leak terms are the three ionic currents (Fig. 2.7).
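The dynamics described on this slide can be sketched numerically. Below is a minimal forward-Euler simulation of the Hodgkin-Huxley equations using the standard textbook squid-axon parameters (not values taken from this lecture); the rate functions and constants are the conventional ones, and the integration scheme is an illustrative choice.

```python
import math

# Minimal Hodgkin-Huxley simulation (forward Euler). Standard textbook
# squid-axon parameters; units: mV, ms, uF/cm^2, mS/cm^2, uA/cm^2.
C_M = 1.0
G_NA, G_K, G_L = 120.0, 36.0, 0.3
E_NA, E_K, E_L = 50.0, -77.0, -54.387

def alpha_n(v): return 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
def beta_n(v):  return 0.125 * math.exp(-(v + 65.0) / 80.0)
def alpha_m(v): return 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
def beta_m(v):  return 4.0 * math.exp(-(v + 65.0) / 18.0)
def alpha_h(v): return 0.07 * math.exp(-(v + 65.0) / 20.0)
def beta_h(v):  return 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))

def simulate(i_ext, t_max=50.0, dt=0.01):
    """Return the membrane-voltage trace under a constant external current."""
    v = -65.0                       # resting potential
    n, m, h = 0.317, 0.053, 0.596   # gating variables near their resting values
    trace = []
    for _ in range(int(t_max / dt)):
        # The three ionic currents: sodium, potassium, leak
        i_na = G_NA * m**3 * h * (v - E_NA)
        i_k = G_K * n**4 * (v - E_K)
        i_l = G_L * (v - E_L)
        n += dt * (alpha_n(v) * (1 - n) - beta_n(v) * n)
        m += dt * (alpha_m(v) * (1 - m) - beta_m(v) * m)
        h += dt * (alpha_h(v) * (1 - h) - beta_h(v) * h)
        v += dt * (i_ext - i_na - i_k - i_l) / C_M
        trace.append(v)
    return trace

# A 10 uA/cm^2 step current drives the model past threshold so it spikes,
# while zero current leaves it near rest.
spiking = simulate(10.0)
resting = simulate(0.0)
```

With the step current the voltage trace shows repetitive action potentials peaking above 0 mV; without it the membrane stays near -65 mV.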

Molecular Basis of Learning and Memory in the Brain

Neuronal Connectivity

Associative Networks. Associative node and network architecture. (A) A simplified neuron that receives a large number of inputs r_i^in. The synaptic efficiency is denoted by w_i; the output of the neuron, r^out, depends on the particular input stimulus. (B) A network of associative nodes. Each component of the input vector, r_i^in, is distributed to each neuron in the network. However, the effect of the input can differ for each neuron, as each individual synapse can have a different efficiency value w_ij, where j labels the neuron in the network. Auto-associative node and network architecture. (A) Schematic illustration of an auto-associative node, distinguished from the associative node of Fig. 7.1A in that it also has a recurrent feedback connection. (B) An auto-associative network consisting of associative nodes that not only receive external input from other neural layers but also have many recurrent collateral connections between the nodes in the neural layer.
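The auto-associative architecture above can be sketched as a small Hopfield-style network: weights w_ij are set by a Hebbian outer product, and recurrent collateral connections let the network restore a stored pattern from a corrupted input. The +1/-1 coding and sign-threshold update here are illustrative assumptions, not the exact rule from the lecture.

```python
# Minimal auto-associative (Hopfield-style) network sketch. Each node
# receives external input and recurrent collaterals; weights w_ij come
# from a Hebbian outer product over stored +1/-1 patterns.

def train(patterns):
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:  # no self-connections
                    w[i][j] += p[i] * p[j] / n
    return w

def recall(w, state, steps=5):
    """Iterate the recurrent dynamics from an initial (possibly corrupted) state."""
    n = len(state)
    s = list(state)
    for _ in range(steps):  # synchronous sign-threshold updates
        s = [1 if sum(w[i][j] * s[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return s

pattern = [1, -1, 1, -1, 1, -1, 1, -1]
w = train([pattern])
corrupted = list(pattern)
corrupted[0] = -corrupted[0]       # flip one bit
restored = recall(w, corrupted)    # recurrent feedback restores the pattern
```

Starting from the corrupted input, one pass of the recurrent dynamics already recovers the stored pattern, which is the associative-memory behavior the slide describes.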

Principles of Brain Processing

Memory, Learning, and the Brain. Memory and learning are the basic mechanisms underlying the brain's thought, behavior, and cognition (McGaugh, J. L., Memory & Emotion: The Making of Lasting Memories). It is our memory that enables us to value everything else we possess. Lacking memory, we would have no ability to be concerned about our hearts, achievements, loved ones, and incomes. Our brain has an amazing capacity to integrate the combined effects of our past experiences with our present experiences in creating our thoughts and actions. All of this is made possible by memory, and memories are formed by the learning process.

Memory Systems in the Brain. Source: Gazzaniga et al., Cognitive Neuroscience: The Biology of the Mind, 2002.

Summary: Principles of Cognitive Learning. Continuity: learning is a continuous, lifelong process. "The experiences of each immediately past moment are memories that merge with current momentary experiences to create the impression of seamless continuity in our lives" [McGaugh, 2003]. Glocality: "perception is dependent on context," and it is important to maintain both global and local, i.e. glocal, representations [Peterson and Rhodes, 2003]. Compositionality: "the brain activates existing metaphorical structures to form a conceptual blend, consisting of all the metaphors linked together" [Feldman, 2006]; "mental chemistry" [J. S. Mill]. [Zhang, IEEE Computational Intelligence Magazine, 2008]

2. Multiple Levels of Representation Source: J. W. Rudy, The Neurobiology of Learning and Memory, 2008.

3. Creation of New Memory Source: J. W. Rudy, The Neurobiology of Learning and Memory, 2008.

What is the information processing principle underlying human intelligence?

Von Neumann's The Computer and the Brain (1958). John von Neumann (1903-1957).

Some Facts about the Brain. Volume and mass: 1.35 liters and 1.35 kg. Processors: ~10^11 neurons. Communication: ~10^14 synapses. Speed: 10^-3 sec (computer: 1 GHz = 10^-9 sec). Memory: 2.8 x 10^20 bits = 14 bits/sec x 10^10 neurons x (2 x 10^9) sec (2 x 10^9 sec = 60 years of lifetime). Computer disk: terabits = 10^12 bits. Reliability: 10^4 neurons dying every day. Plasticity: biochemical learning.
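The memory figure on this slide is a back-of-the-envelope product in the style of von Neumann's estimate; assuming the garbled exponents are 10^10 neurons and 2 x 10^9 seconds of lifetime, the arithmetic checks out:

```python
# Quick arithmetic check of the slide's lifetime-memory estimate:
# bits/sec per neuron x number of neurons x lifetime in seconds.
bits_per_sec = 14
neurons = 10**10          # figure assumed in the estimate
lifetime_sec = 2 * 10**9  # roughly 60 years
total_bits = bits_per_sec * neurons * lifetime_sec
print(total_bits)  # 280000000000000000000, i.e. 2.8 x 10^20 bits
```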

Principles of Information Processing in the Brain. The Principle of Uncertainty: precision vs. prediction. The Principle of Nonseparability ("UN-IBM"): processor vs. memory. The Principle of Infinity: limited matter vs. unbounded memory. The Principle of "Big Numbers Count": hyperinteraction of huge numbers of neurons (and even greater numbers of molecules). The Principle of "Matter Matters": the material basis of "consciousness". [Zhang, 2005]

Neural Computers

Learning to extract the orientation of a face patch (Salakhutdinov & Hinton, NIPS 2007)

The training and test sets for predicting face orientation: 11,000 unlabeled cases; 100, 500, or 1,000 labeled cases; face patches from new people.

Deep Autoencoders (Hinton & Salakhutdinov, 2006). They always looked like a really nice way to do non-linear dimensionality reduction, but it is very difficult to optimize deep autoencoders using backpropagation. We now have a much better way to optimize them: first train a stack of 4 RBMs, then "unroll" them, then fine-tune with backprop. Layer sizes: 28x28 inputs, 1000 neurons, 500 neurons, 250 neurons, 30 linear units.
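The "unrolling" step above can be sketched structurally: the encoder layers are mirrored into a decoder by transposing their weights, giving a 784-1000-500-250-30-250-500-1000-784 network. The weights here are random placeholders, so this shows only the shape of the unrolled network before any RBM pre-training or fine-tuning, not the trained model.

```python
import numpy as np

# Structural sketch of the unrolled deep autoencoder: encoder
# 784-1000-500-250-30, decoder mirrored via transposed weights.
rng = np.random.default_rng(0)
sizes = [28 * 28, 1000, 500, 250, 30]

encoder = [rng.normal(0, 0.01, (a, b)) for a, b in zip(sizes, sizes[1:])]
decoder = [w.T.copy() for w in reversed(encoder)]  # "unrolled" transposes

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x):
    h = x
    for w in encoder[:-1]:
        h = sigmoid(h @ w)
    code = h @ encoder[-1]          # 30 linear code units
    h = code
    for w in decoder[:-1]:
        h = sigmoid(h @ w)
    recon = sigmoid(h @ decoder[-1])  # reconstruction of the 28x28 input
    return code, recon

x = rng.random((1, 28 * 28))
code, recon = forward(x)
```

Fine-tuning with backprop would then adjust all encoder and decoder weights jointly to minimize reconstruction error.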

A comparison of methods for compressing digit images to 30 real numbers: real data; 30-D deep autoencoder; 30-D logistic PCA; 30-D PCA.

Retrieving documents that are similar to a query document. We can use an autoencoder to find low-dimensional codes for documents that allow fast and accurate retrieval of similar documents from a large set. We start by converting each document into a "bag of words". This is a 2000-dimensional vector that contains the counts for each of the 2000 commonest words.
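The bag-of-words conversion described above can be sketched in a few lines. The tiny vocabulary here is illustrative; the real one holds the 2000 most frequent words, giving a 2000-dimensional count vector per document.

```python
import re
from collections import Counter

# Illustrative 5-word vocabulary standing in for the 2000 commonest words.
vocabulary = ["neural", "network", "brain", "memory", "learning"]

def bag_of_words(text, vocab):
    """Count occurrences of each vocabulary word in the document."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(tokens)
    return [counts[w] for w in vocab]

doc = "Learning in the brain: neural network models of memory and learning"
vec = bag_of_words(doc, vocabulary)
print(vec)  # [1, 1, 1, 1, 2]
```

Each document becomes one such fixed-length vector, which the autoencoder then compresses to a short code used for retrieval.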

(Plot: proportion of retrieved documents in the same class as the query, versus number of documents retrieved.)

First compress all documents to 2 numbers using a type of PCA. Then use different colors for different document categories.

First compress all documents to 2 numbers. Then use different colors for different document categories.