Bain on Neural Networks and Connectionism Stephanie Rosenthal September 9, 2015.

Associationism and the Brain
Aristotle counted four laws of association when he examined the processes of remembrance and recall:
1. The law of contiguity: things or events that occur close to each other in space or time tend to get linked together.
2. The law of frequency: the more often two things or events are linked, the more powerful that association.
3. The law of similarity: if two things are similar, the thought of one will tend to trigger the thought of the other.
4. The law of contrast: seeing or recalling something may also trigger the recollection of something opposite.
--- Dr. C. George Boeree

Dawn of Connectionism
David Hartley's Observations on Man (1749):
- We receive input through vibrations, and those are transferred to the brain.
- Memories could also be small vibrations (called "vibratiuncles") in the same regions.
- Our brain represents compound or connected ideas by connecting our memories with our current senses.
- Science at the time did not yet know about neurons.

Dawn of Connectionism
Alexander Bain (The Senses and the Intellect (1855), The Emotions and the Will (1859), Mind and Body (1873)):
- Knowing that the brain was composed of neurons, he tried to match what he knew about memory with the structure of the brain (associationism + structure).
- Idea 1: the "nerve currents" from a memory of an event are the same as, but reduced from, the "original shock".
- Idea 2: "for every act of memory, … there is a specific grouping, or co-ordination of sensations … by virtue of specific growths in cell junctions"

Bain's Idea 1: Neural Groupings
Neurons excite and stimulate each other. They are flexible, so different combinations of stimulation can produce different results.

Bain's Idea 1: Neural Groupings
Different intensities of activation of A lead to differences in when X and Y are activated.
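Bain's grouping idea can be caricatured as a toy threshold model (purely illustrative; the cell names and threshold values are assumptions, not Bain's own formalism): a single cell A drives cells X and Y, which have different firing thresholds, so the intensity of A's activation determines which of them fire.

```python
def grouping(a_intensity, threshold_x=0.5, threshold_y=1.5):
    """Return which of cells X and Y fire for a given intensity of cell A.

    X has a low firing threshold and Y a high one, so weak activation of A
    recruits only X, while strong activation recruits both.
    """
    return a_intensity >= threshold_x, a_intensity >= threshold_y

print(grouping(1.0))  # weak stimulation: (True, False) - only X fires
print(grouping(2.0))  # strong stimulation: (True, True) - both fire
```

The same flexible wiring thus produces different outcomes depending only on the strength of the input, which is the point of the slide.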

Bain's Idea 2: Making Memories
"when two impressions concur, or closely succeed one another, the nerve currents find some bridge or place of continuity, better or worse, according to the abundance of nerve matter available for the transition."

Hebb on Neural Nets (1949)
"When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased."
Sounds a lot like Bain…
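Hebb's postulate is commonly formalized as a weight change proportional to the product of presynaptic and postsynaptic activity. A minimal sketch (the learning rate and activity values are illustrative choices):

```python
def hebbian_update(w, pre, post, lr=0.1):
    """Hebb's rule: strengthen the connection from A (pre) to B (post)
    when both are active together: delta_w = lr * pre * post."""
    return w + lr * pre * post

w = 0.0
for _ in range(5):                          # A repeatedly takes part in firing B...
    w = hebbian_update(w, pre=1.0, post=1.0)
print(w)                                    # ...so the weight grows, to about 0.5
```

Note that when either cell is silent the product is zero and the weight is unchanged, matching the "takes part in firing it" condition.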

Bain's Doubts
In 1873, Bain postulated that there must be one million neurons and 5 billion connections relating to 200,000 "acquisitions". In 1883, he was concerned that he had not taken into account the number of "partially formed associations" and the number of neurons responsible for recall/learning.

Connectionism
Definition: "a movement in cognitive science that hopes to explain intellectual abilities using artificial neural networks."
Alternative: classicism argues that symbolic representations are encoded directly into memory.

Training Neural Networks
Hebbian learning is a well-known unsupervised technique that strengthens the weight between a pair of nodes if the two nodes are often active at the same time. Supervised learning requires a training set to be shown in sequence to the net, with the weights adjusted to match the known desired output. Training is still a "fine art".
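As a concrete sketch of the supervised scheme, here is a single threshold unit trained with the classic perceptron update on a toy AND task (the dataset, learning rate, and epoch count are illustrative choices, not from the slides):

```python
def step(x):
    """Threshold activation: fire (1) iff the weighted input is non-negative."""
    return 1 if x >= 0 else 0

def train(examples, epochs=10, lr=1):
    """Show each example to the net in sequence and nudge the weights
    toward the known desired output (perceptron-style update)."""
    w = [0, 0]
    b = 0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = step(w[0] * x1 + w[1] * x2 + b)
            err = target - out          # difference from the desired output
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(AND)
predictions = [step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in AND]
print(predictions)  # [0, 0, 0, 1] - the net has learned AND
```

Hebbian learning, by contrast, would update the weights with no error signal at all, using only the co-activity of the connected nodes.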

Strengths of NNs
- Connectionist models seem particularly well matched to what we know about neurology.
- Neural networks are well adapted to problems that require resolving many conflicting constraints in parallel.
- Connectionist models accommodate graded notions of category membership.

Weaknesses of NNs
- Connectionists usually do not attempt to explicitly model the variety of different kinds of brain neurons, nor the effects of neurotransmitters and hormones.
- It is far from clear that the brain contains the kind of reverse connections that would be needed if it were to learn by a process like backpropagation.
- It is widely felt that neural networks are not good at the kind of rule-based processing that underlies language and reasoning.

Weaknesses of NNs
Systematicity: the ability to understand a thought without having separately learned each particular application of it. Example: anyone who understands "John loves Mary" can also understand "Mary loves John". Fodor and McLaughlin show that it is possible to construct NNs that recognize John's love but not Mary's, which is not what the human brain would do.

Other Ideas
Folk psychology says people have plans, beliefs, and desires, but there does not seem to be a brain structure that supports these units. Predictive coding says that our brains are constantly comparing reality to memory and detecting surprises. Training NNs to detect surprise requires adding backward edges that correspond to differences from the "generic".
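The predictive-coding comparison can be caricatured in a few lines (purely illustrative; real models use hierarchies of learned predictions, not a single number): an internal "memory" predicts the next input, and surprise is the mismatch between prediction and reality.

```python
def surprise(predicted, actual):
    """Prediction error: how far reality deviates from the expectation."""
    return abs(actual - predicted)

memory_prediction = 5.0                   # what the internal model expects
print(surprise(memory_prediction, 5.1))   # close to expectation: low surprise
print(surprise(memory_prediction, 12.0))  # large deviation: high surprise
```

The backward edges mentioned above would carry the prediction down so that this kind of difference can be computed at each layer.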

Summary
Bain first discussed neurons and their ability to compute different values in networks. Connectionism is the idea that memories are stored in the computation itself, versus the classicist picture of digital memory separate from processing. NNs have been successfully applied to many different problems, but challenges remain in making them act like human brains.

Questions and Comments