Towards an Implementation of a Theory of Visual Learning in the Brain
Shamit Patel
CMSC 601
May 2, 2011


The Problem
Develop a working theory of learning in the human neocortex and implement it in software.
The goal is for the learning algorithm to match or exceed human-level accuracy in visual pattern recognition and other hierarchical inference tasks.

Hypothesis
My hypothesis is that the brain learns through a feedback loop of sensing and reacting; I call this theory SensoReaction. The brain essentially learns through experience. Feedback is the crucial ingredient of intelligence because it allows the brain to refine its predictions into the correct answer.

Motivation
Applications: medical image processing, quality control, surveillance.
Ultimately, we would like to build machines that operate on the same neurocomputational mechanisms as the human brain.

From Von Neumann Architecture to Neural Architecture of the Brain
(Figures: a von Neumann architecture diagram and a Blue Brain neuron rendering; image links not preserved in this transcript.)

Related Work
Numenta's Hierarchical Temporal Memory (HTM) model
Riesenhuber and Poggio's HMAX model
Fukushima's Neocognitron model

The Human Neocortex
(image omitted in transcript)

Hierarchical Temporal Memory
(image omitted in transcript)

Hierarchical Temporal Memory
Directly based on the structure and computational properties of the human neocortex [1]
Four main tasks of HTM: learning, inference, prediction, and behavior [1]
Strength: efficiency due to hierarchical structure [1]
Weakness: needs lots of training data

HMAX
(image omitted in transcript)

HMAX
Models the behavior of the ventral visual stream [2]
Fundamental operations: (1) a weighted linear sum that aggregates simple features into complex ones, and (2) a highly nonlinear MAX operation that computes a unit's output from its most active input [2]
Strengths: efficiency, and invariance to the position and size of the input pattern [2]
Weakness: poor generalization to objects of different classes [2]
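
The two fundamental operations above can be sketched in a few lines of Python. This is a toy illustration, not the actual HMAX implementation: an S-unit takes a weighted linear sum against a stored template, and a C-unit applies the MAX operation over S-unit responses at several positions, which is what yields the position invariance.

```python
import numpy as np

def s_unit(inputs, template):
    # Weighted linear sum: combines simple features into a more
    # complex one by matching the input patch against a template.
    return float(np.dot(inputs, template))

def c_unit(s_responses):
    # MAX operation: the output tracks the single most active
    # afferent S-unit, giving invariance to position and scale.
    return float(np.max(s_responses))

# Toy example: the same feature appears at one of three positions.
template = np.array([1.0, 0.5])
patches = [np.array([0.9, 0.4]),   # feature present at position 1
           np.array([0.1, 0.0]),   # background
           np.array([0.2, 0.1])]   # background
s_layer = [s_unit(p, template) for p in patches]
c_out = c_unit(s_layer)  # responds strongly wherever the feature is
```

Shifting the feature to a different patch leaves `c_out` unchanged, which is exactly the invariance the MAX operation buys.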

Neocognitron
(image omitted in transcript)

Neocognitron
Self-organized via unsupervised learning [3]
S-cells have modifiable connections, and C-cells are invariant to the position, shape, and size of the input pattern [3]
Strength: unsupervised learning means we don't need labeled data
Weakness: poor generalization to objects of different classes

Approach
1) Implementation of the HTM system
2) Integration of the SensoReaction algorithm into the HTM system
3) Training the HTM system on temporal image data
4) Testing the HTM system on novel input patterns
5) Statistical analysis of the results

Implementation of HTM system
I have already implemented a considerable part of the HTM system, including the overall structure of the network and most of the training functionality. The remaining work consists of implementing inference and integrating SensoReaction into the system.

Integration of SensoReaction algorithm into HTM system
SensoReaction is a feedback propagation mechanism that allows predictions to be propagated down the hierarchy for correction. The algorithm will be integrated into the HTM system by first introducing feedback connections between every pair of successive layers in the network; predictions will then be passed down the hierarchy via these feedback connections.
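
One way to picture the feedback pass is as a top-down sweep in which each layer's bottom-up belief is reweighted by the prediction arriving over the feedback connection above it. The sketch below is a hypothetical reading of the mechanism, not the proposal's actual code: `feedback_pass` and the weight matrices are illustrative names, and beliefs are modeled as simple probability vectors.

```python
import numpy as np

def normalize(v):
    s = v.sum()
    return v / s if s > 0 else v

def feedback_pass(beliefs, top_down_weights):
    # Walk from the top of the hierarchy down, one feedback link
    # (weight matrix) per pair of successive layers. Each layer's
    # bottom-up belief is corrected by the prediction from above.
    corrected = [beliefs[-1]]
    prediction = beliefs[-1]
    for W, belief in zip(reversed(top_down_weights), reversed(beliefs[:-1])):
        prediction = normalize(W @ prediction)       # prediction for the layer below
        prediction = normalize(belief * prediction)  # corrected by bottom-up evidence
        corrected.append(prediction)
    return list(reversed(corrected))

# Two-layer toy hierarchy: the bottom layer is ambiguous (0.5/0.5),
# while the top layer strongly predicts the first hypothesis.
beliefs = [np.array([0.5, 0.5]), np.array([0.9, 0.1])]
feedback_weights = [np.eye(2)]  # one feedback link, identity mapping
corrected = feedback_pass(beliefs, feedback_weights)
```

After the pass, the bottom layer's ambiguity is resolved in favor of the top layer's prediction, which is the "refining predictions into the correct answer" role the hypothesis assigns to feedback.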

Training the HTM system
Present hundreds of streams of temporal image data to the input layer and allow the system to build its internal representations. Training will consist of: (1) memorizing patterns, (2) building the Markov graphs, and (3) forming the temporal groups.

Evaluation/Testing the HTM system
Present thousands of noisy input patterns to the HTM network and observe the system's classification accuracy. The SensoReaction algorithm comes into play here by making predictions, passing them down the hierarchy, correcting them, and passing them back up.
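
The evaluation protocol itself is simple to sketch: corrupt stored prototypes with random bit flips, classify each noisy pattern, and tally accuracy. The nearest-prototype classifier below is only a placeholder for the trained HTM network's inference step; the prototypes and noise level are made up for illustration.

```python
import random

def add_noise(pattern, flip_prob, rng):
    # Flip each bit independently with probability flip_prob.
    return [b ^ 1 if rng.random() < flip_prob else b for b in pattern]

def classify(pattern, prototypes):
    # Nearest prototype by Hamming distance stands in for the
    # trained network's inference over a noisy input.
    def dist(a, b):
        return sum(x != y for x, y in zip(a, b))
    return min(prototypes, key=lambda label: dist(pattern, prototypes[label]))

rng = random.Random(0)
prototypes = {"bar": [1, 1, 1, 0, 0, 0], "dot": [0, 0, 0, 0, 1, 0]}
trials = 1000
correct = 0
for _ in range(trials):
    label = rng.choice(list(prototypes))          # pick a true class
    noisy = add_noise(prototypes[label], 0.1, rng)  # corrupt the pattern
    correct += classify(noisy, prototypes) == label
accuracy = correct / trials
```

The same loop, run once with feedback enabled and once without, produces the two per-run accuracy samples that the statistical analysis compares.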

Statistical Analysis of Results
The classification accuracy of the HTM system with SensoReaction will be compared with the classification accuracy of the standard HTM system. A two-sample t-test will be used to compare the classification accuracies of the two systems.
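
The two-sample t-test (pooled-variance form) is straightforward to compute from the per-run accuracies of each system. The accuracy numbers below are made up for illustration; in practice the statistic would be compared against the critical value for the resulting degrees of freedom.

```python
import math
from statistics import mean, variance

def two_sample_t(a, b):
    # Pooled-variance two-sample t statistic for comparing the
    # mean classification accuracies of two systems across runs.
    n1, n2 = len(a), len(b)
    sp2 = ((n1 - 1) * variance(a) + (n2 - 1) * variance(b)) / (n1 + n2 - 2)
    t = (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2  # statistic and degrees of freedom

# Hypothetical per-run accuracies (illustrative numbers only).
with_feedback = [0.91, 0.93, 0.90, 0.94, 0.92]
without_feedback = [0.88, 0.87, 0.90, 0.86, 0.89]
t, df = two_sample_t(with_feedback, without_feedback)
```

A large positive `t` at the given `df` would support the claim that adding SensoReaction improves accuracy beyond run-to-run variation.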

Feasibility of Approach
SensoReaction is feasible because it is modeled on how the neocortex itself processes feedback. Feedback should improve classification accuracy because prior experience is taken into account.

Conclusion
Feedback is the critical piece of intelligence. The brain learns through constant sensing and reacting. The ultimate goal is to build machines that work on the same computational principles as the brain.

References
[1] Numenta, Inc. (2010, December 10). Hierarchical Temporal Memory including HTM cortical learning algorithms (Version 0.2). Retrieved from overview/education/HTM CorticalLearningAlgorithms.pdf
[2] Riesenhuber, M., & Poggio, T. (1999). Hierarchical models of object recognition in cortex. Nature Neuroscience, 2(11), 1019–1025.
[3] Fukushima, K. (1980). Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics, 36(4), 193–202.

Questions?