Phantom Limb Phenomena

Hand movement observation by individuals born without hands: phantom limb experience constrains visual limb perception. Funk M, Shiffrar M, Brugger P. We investigated the visual experiences of two persons born without arms, one with and the other without phantom sensations. Normally-limbed observers perceived rate-dependent paths of apparent human movement. The individual with phantom experiences showed the same perceptual pattern as control participants; the other did not. Neural systems matching action observation, action execution and motor imagery are likely to contribute to the definition of the body schema in profound ways.

Summary
Both genetic factors and activity-dependent factors play a role in developing the brain's architecture and circuitry. There are critical developmental periods where nurture is essential, but there is also a great ability for the adult brain to regenerate.
Next lecture: what computational models satisfy some of the biological constraints?
Question: what is the relevance of neural development and learning to language and thought?

Connectionist Models: Basics
Jerome Feldman
CS182/CogSci110/Ling109
Spring 2007

Realistic Biophysical Neuron Simulations
Not covered in any UCB class? The GENESIS and NEURON simulation systems.

Neural networks abstract from the details of real neurons:
- Conductivity delays are neglected.
- An output signal is either discrete (e.g., 0 or 1) or a real-valued number (e.g., between 0 and 1).
- Net input is calculated as the weighted sum of the input signals.
- Net input is transformed into an output signal via a simple function (e.g., a threshold function).

The McCulloch-Pitts Neuron
y_j: output from unit j
w_ij: weight on the connection from unit j to unit i
x_i: weighted sum of inputs to unit i
θ_i: threshold of unit i

x_i = Σ_j w_ij · y_j
y_i = f(x_i − θ_i)
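A minimal sketch of this unit in Python; the particular weights, inputs, and threshold are illustrative assumptions, not from the slides:

```python
def mp_unit(inputs, weights, theta):
    """McCulloch-Pitts unit: hard threshold on the weighted sum of inputs."""
    x = sum(w_j * y_j for w_j, y_j in zip(weights, inputs))  # x_i = sum_j w_ij * y_j
    return 1 if x - theta >= 0 else 0                        # y_i = f(x_i - theta_i)

print(mp_unit([1, 0, 1], [0.5, 0.5, 0.5], theta=0.9))  # -> 1, since 1.0 - 0.9 >= 0
```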

Mapping from Neuron to Computational Abstraction

Nervous system     | Computational abstraction
Neuron             | Node
Dendrites          | Input link and propagation
Cell body          | Combination function, threshold, activation function
Axon               | Output link
Spike rate         | Output
Synaptic strength  | Connection strength/weight

Simple Threshold Linear Unit

Simple Neuron Model

A Simple Example
a = x1·w1 + x2·w2 + x3·w3 + … + xn·wn
Here a = 1·x1 + 0.5·x2 + 0.1·x3, with x1 = 0, x2 = 1, x3 = 0, so the net input is a = 0.5. With a threshold bias of 1, net input − threshold bias < 0, so output = 0.
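The same computation as a short Python check:

```python
w = [1.0, 0.5, 0.1]                        # weights from the example above
x = [0, 1, 0]                              # inputs x1, x2, x3
a = sum(wi * xi for wi, xi in zip(w, x))   # net input a = 0.5
output = 1 if a - 1.0 >= 0 else 0          # threshold bias = 1; 0.5 - 1 < 0
print(a, output)                           # 0.5 0
```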

Simple Neuron Model

Different Activation Functions
- Threshold activation function (step)
- Piecewise linear activation function
- Sigmoid activation function
- Gaussian activation function (radial basis function)
- Bias unit: a constant input x0 = 1
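Sketches of these activation functions in Python; the parameter choices (e.g. the ramp of the piecewise-linear unit) are assumptions, not from the slides:

```python
import math

def step(x, theta=0.0):                    # threshold (step) activation
    return 1.0 if x >= theta else 0.0

def piecewise_linear(x):                   # linear ramp, clipped to [0, 1]
    return max(0.0, min(1.0, 0.5 + x))

def sigmoid(x, k=1.0):                     # logistic sigmoid with gain k
    return 1.0 / (1.0 + math.exp(-k * x))

def gaussian(x, mu=0.0, sigma=1.0):        # Gaussian / radial basis function
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))
```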

Types of Activation functions

The Sigmoid Function (plot: x = net_i, y = a): a = 1 / (1 + e^(−k·net_i))

The Sigmoid Function: the output approaches 0 for large negative net input and 1 for large positive net input.

The Sigmoid Function: the steep central region of the curve is where the output is most sensitive to changes in the input.

Changing the Exponent k·net_i
k > 1 makes the sigmoid steeper; k < 1 makes it shallower.

Nice Property of Sigmoids
The derivative can be written in terms of the output itself: dy/d(net) = k·y·(1 − y), which makes gradient computations cheap.
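A quick numerical check of this property, as a sketch with k = 1:

```python
import math

def sigmoid(x, k=1.0):
    return 1.0 / (1.0 + math.exp(-k * x))

def sigmoid_deriv(x, k=1.0):
    y = sigmoid(x, k)
    return k * y * (1.0 - y)      # derivative expressed via the output y

# Compare with a central-difference numerical derivative.
x, h = 0.3, 1e-6
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)
print(abs(sigmoid_deriv(x) - numeric) < 1e-8)  # True
```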

Radial Basis Function

Stochastic Units
Replace the binary threshold units with binary stochastic units that make biased random decisions: the unit fires with probability p = 1 / (1 + e^(−net/T)). The "temperature" T controls the amount of noise.
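A sketch of such a unit, using the logistic form for the firing probability given above:

```python
import math
import random

def stochastic_unit(net_input, temperature=1.0):
    """Fires (returns 1) with probability given by the logistic of net/T."""
    p_fire = 1.0 / (1.0 + math.exp(-net_input / temperature))
    return 1 if random.random() < p_fire else 0

# Low temperature: nearly deterministic. High temperature: nearly a coin flip.
print(sum(stochastic_unit(1.0, temperature=0.1) for _ in range(1000)))   # ~1000
print(sum(stochastic_unit(1.0, temperature=100.0) for _ in range(1000))) # ~500
```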

Types of Neuron Parameters
- The form of the input function: e.g., linear, sigma-pi (multiplicative), cubic.
- The activation-output relation: linear, hard-limiter, or sigmoidal.
- The nature of the signals used to communicate between nodes: analog or boolean.
- The dynamics of the node: deterministic or stochastic.

Computing Various Functions
McCulloch-Pitts neurons can compute logical functions: AND, NOT, OR.

Computing Other Functions: the OR Function
Assume a binary threshold activation function. What should you set w01, w02 and w0b to so that you get the right answers for y0? Inputs i1 and i2 and a bias input b = 1 feed the output unit y0 through weights w01, w02 and w0b.

i1  i2 | y0
0   0  | 0
0   1  | 1
1   0  | 1
1   1  | 1

Many Answers Would Work
y0 = f(w01·i1 + w02·i2 + w0b·b); recall the threshold function. The separation happens when w01·i1 + w02·i2 + w0b·b = 0; move things around and you get the decision line

i2 = −(w01/w02)·i1 − (w0b·b)/w02
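One concrete choice that works, as a sketch; w01 = w02 = 1 and w0b = −0.5 is just one of the many valid settings:

```python
def y0(i1, i2, w01=1.0, w02=1.0, w0b=-0.5, b=1.0):
    """Binary threshold unit computing OR with one valid weight setting."""
    return 1 if w01 * i1 + w02 * i2 + w0b * b >= 0 else 0

for i1 in (0, 1):
    for i2 in (0, 1):
        print(i1, i2, y0(i1, i2))  # reproduces the OR truth table above
```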

Decision Hyperplane
The two classes are therefore separated by the 'decision' line, which is defined by setting the activation equal to the threshold. This result generalises to threshold logic units (TLUs) with n inputs: in 3-D the two classes are separated by a decision plane, and in n-D by a decision hyperplane.

Linearly Separable Patterns
The perceptron is an architecture that can solve this type of decision-boundary problem. An "on" response in the output node represents one class, and an "off" response represents the other.

The Perceptron

The Perceptron (figure): an input pattern is fed through weighted connections to an output node, which produces a classification.

A Pattern Classification

Pattern Space
The space in which the inputs reside is referred to as the pattern space. Each pattern determines a point in the space by using its component values as space-coordinates. In general, for n inputs, the pattern space will be n-dimensional. Clearly, for n-D, the pattern space cannot be drawn or represented in physical space. This is not a problem: we shall return to the idea of using higher-dimensional spaces later. However, the geometric insight obtained in 2-D will carry over (when expressed algebraically) into n-D.

The XOR Function

X1 \ X2 | X2 = 0 | X2 = 1
X1 = 0  |   0    |   1
X1 = 1  |   1    |   0

The Input Pattern Space

The Decision Planes

Multi-layer Feed-forward Network
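As a sketch of why the extra layer helps, here is a hand-set two-layer network of threshold units that computes XOR; the OR-and-not-AND construction and its weights are illustrative, not taken from the slides:

```python
def step(x):
    return 1 if x >= 0 else 0

def xor_net(x1, x2):
    h1 = step(x1 + x2 - 0.5)      # hidden unit 1: OR
    h2 = step(x1 + x2 - 1.5)      # hidden unit 2: AND
    return step(h1 - h2 - 0.5)    # output: OR and not AND

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, xor_net(x1, x2))  # 0 0 0 / 0 1 1 / 1 0 1 / 1 1 0
```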

Pattern Separation and NN architecture

Conjunctive or Sigma-Pi Nodes
The previous spatial summation function supposes that each input contributes to the activation independently of the others: the contribution to the activation from input 1, say, is always a constant multiplier (w1) times x1. Suppose, however, that the contribution from input 1 also depends on input 2, and that the larger input 2 is, the larger input 1's contribution becomes. The simplest way of modeling this is to include a term in the activation like w12·(x1·x2) where w12 > 0 (for an inhibiting influence of input 2 we would, of course, have w12 < 0). The full activation is then
a = w1·x1 + w2·x2 + w3·x3 + w12·(x1·x2) + w23·(x2·x3) + w13·(x1·x3)
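A sketch of this sigma-pi activation; the weight values are illustrative:

```python
def sigma_pi(x, w, w_pair):
    """Linear terms plus weighted pairwise products of the inputs."""
    a = sum(wi * xi for wi, xi in zip(w, x))
    for (i, j), wij in w_pair.items():     # e.g. w12 * (x1 * x2)
        a += wij * x[i] * x[j]
    return a

# a = 0.2*1 + 0.3*1 + 0.1*0 + 0.5*(1*1) + 0.4*(1*0) + 0.6*(1*0) = 1.0
print(sigma_pi([1, 1, 0], [0.2, 0.3, 0.1], {(0, 1): 0.5, (1, 2): 0.4, (0, 2): 0.6}))
```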

Sigma-Pi units

Sigma-Pi Unit

Biological Evidence for Sigma-Pi Units
[axo-dendritic synapse] The stereotypical synapse consists of an electro-chemical connection between an axon and a dendrite; hence it is an axo-dendritic synapse.
[presynaptic inhibition] However, there is a large variety of synaptic types and connection groupings. Of special importance are cases where the efficacy of the axo-dendritic synapse between axon 1 and the dendrite is modulated (inhibited) by the activity in axon 2, via the axo-axonic synapse between the two axons. This might therefore be modelled by a quadratic term like w12·(x1·x2).
[synapse cluster] Here the effects of the individual synapses will surely not be independent, and we should look to model this with a multilinear term in all the inputs.

Biological Evidence for Sigma-Pi Units (figure): [axo-dendritic synapse] [presynaptic inhibition] [synapse cluster]

Link to Vision: The Necker Cube

Constrained Best Fit in Nature
inanimate:
- physics: lowest energy state
- chemistry: molecular minima
animate:
- biology: fitness, MEU (neuroeconomics)
- vision: threats, friends
- language: errors, NTL

Computing Other Relations
The 2/3 node is a useful function that activates its output if any 2 of its 3 inputs are active. Such a node is also called a triangle node and will be useful for lots of representations (see the sketch below).
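A minimal sketch of a 2-of-3 triangle node, assuming binary inputs:

```python
def triangle_node(a, b, c):
    """Fires if at least two of its three binary inputs are active."""
    return 1 if a + b + c >= 2 else 0

print(triangle_node(1, 1, 0))  # -> 1
print(triangle_node(1, 0, 0))  # -> 0
```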

Triangle Nodes and McCulloch-Pitts Neurons? (figure: three mutually connected units A, B, C)

Representing Concepts Using Triangle Nodes
Triangle nodes: when two of the neurons fire, the third also fires.

"They all rose"
Triangle nodes: when two of the neurons fire, the third also fires; a model of spreading activation.

Basic Ideas Behind the Model
- Parallel activation streams.
- Top-down and bottom-up activation combine to determine the best matching structure.
- Triangle nodes bind features of objects to values.
- Mutual inhibition and competition between structures.
- Mental connections are active neural connections.

5 Levels of the Neural Theory of Language
- Cognition and Language
- Computation
- Structured Connectionism
- Computational Neurobiology
- Biology
(The original diagram arranges topics along these levels: abstraction, Grammar, Metaphor, SHRUTI, Spatial Relation, Motor Control, Triangle Nodes, Neural Net, Neural Development; with Quiz, Midterm and Finals as course checkpoints.)

Can We Formalize/Model These Intuitions?
What is a neurally plausible computational model of spreading activation that captures these features? What does semantics mean in neurally embodied terms? What are the neural substrates of concepts that underlie verbs, nouns, and spatial predicates?