Michael Arbib: CS564 - Brain Theory and Artificial Intelligence, USC, Fall 2001. Lecture 3: Networks of Neurons

Presentation transcript:

Slide 1: Lecture 3 - The Brain as a Network of Neurons

Reading assignments:*
TMB2: Section 2.3
HBTNN: Single Cell Models (Softky and Koch); Axonal Modeling (Koch and Bernander); Perspective on Neuron Model Complexity (Rall)
* Unless indicated otherwise, the TMB2 material is the required reading, and the other readings are supplementary.

Slide 2: The "basic" biological neuron

The soma and dendrites act as the input surface; the axon carries the outputs. The tips of the branches of the axon form synapses upon other neurons or upon effectors (though synapses may occur along the branches of an axon as well as at the ends). The arrows indicate the direction of "typical" information flow from inputs to outputs.

Slide 3: From Passive to Active Propagation

For "short" cells, passive propagation suffices to signal a potential change from one end to the other. If the axon is long, this is inadequate, since changes at one end would decay away almost completely before reaching the other end. If the change in potential difference is large enough, then in a cylindrical configuration such as the axon, a pulse can actively propagate at full amplitude: the Hodgkin-Huxley Equations (1952).
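To make the decay argument concrete: at steady state an infinite passive cable attenuates a potential change exponentially with distance, V(x) = V(0)·e^(-x/λ). Here is a minimal Python sketch; the length constant value is an illustrative assumption, not a figure from the lecture.

```python
# Passive (electrotonic) decay along a cable: a potential change
# attenuates exponentially with distance, with length constant lambda_mm.
# The value 1.0 mm is an illustrative assumption.
import math

def passive_attenuation(x_mm, lambda_mm=1.0):
    """Fraction of a potential change surviving after x_mm of passive spread."""
    return math.exp(-x_mm / lambda_mm)

# A "short" cell versus a long axon: why active propagation is needed.
print(passive_attenuation(0.1))   # ~0.90: short cell, passive spread suffices
print(passive_attenuation(50.0))  # ~2e-22: long axon, the signal all but vanishes
```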

Slide 4: Excitatory and Inhibitory Synapses

Dale's law states that each neuron releases a single transmitter substance (a "first approximation"). This does not mean that the synapses made by a single neuron are either all excitatory or all inhibitory. Modern understanding: channels which "open" and "close" provide the mechanisms for the Hodgkin-Huxley equation, and this notion of channels extends to synaptic transmission. The action of a synapse depends both on the transmitter released presynaptically and on specialized receptors in the postsynaptic membrane. Moreover, neurons may secrete transmitters which act as neuromodulators of the function of a circuit on some quite extended time scale (cf. TMB2 Sections 6.1 and 8.1).

Slide 5: A Synapse

Slide 6: Down the Levels of Detail to Neuron and Synapse

[Figures: a small piece of cerebellum; a Purkinje cell; just a few of the synapses of parallel fibres on a branch of the dendritic tree.]

Slide 7: The Neurochemistry of Synaptic Plasticity: The Grounding of Learning and Memory

Eric Kandel's view of how molecular changes in a synapse may produce "short term memory" and "long term memory" in Aplysia [cf. TMB2 Section 8.1]. The Nobel Prize in Physiology or Medicine for 2000 was awarded jointly to Arvid Carlsson, Paul Greengard and Eric Kandel.

Slide 8: Approximations are crucial

The brain contains on the order of 10^10 neurons, each with 10^4 or more synapses, each of which is a complex chemical machine. A general challenge: using computer and mathematical analysis to validate simplifications which allow us to find the right projection of the complexity of units at one level of analysis to provide components for large-scale models at the next level of analysis.

Slide 9: Multi-Level Modeling and Data Analysis

McCulloch-Pitts neural networks (logic and computability theory); the Hodgkin-Huxley model of the action potential, i.e. spike propagation (nonlinear differential equations).

Slide 10: From Mind to Neural Networks

Warren McCulloch: "What is a man that he may know a number, and a number that a man may know it?" A philosophy-driven approach: inspired by Kant and Leibniz, seeking to map logic onto neural function. Mathematics: mathematical logic (propositional logic; Turing's theory of computability).

Slide 11: Warren McCulloch and Walter Pitts (1943)

A McCulloch-Pitts neuron operates on a discrete time-scale, t = 0, 1, 2, 3, ..., with time tick equal to one refractory period. At each time step, an input or output is on or off: 1 or 0, respectively. Each connection, or synapse, from the output of one neuron to the input of another has an attached weight.

Slide 12: Excitatory and Inhibitory Synapses

We call a synapse excitatory if w_i > 0, and inhibitory if w_i < 0. We also associate a threshold θ with each neuron. A neuron fires (i.e., has value 1 on its output line) at time t+1 if the weighted sum of inputs at t reaches or passes θ:

y(t+1) = 1 if and only if Σ_i w_i x_i(t) ≥ θ.
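This firing rule is simple enough to state in a few lines of code. Below is a minimal sketch; the name mp_neuron and the example weights are ours, not from the lecture.

```python
# A minimal sketch of the McCulloch-Pitts firing rule:
# y(t+1) = 1 iff sum_i w_i x_i(t) >= theta.
def mp_neuron(x, w, theta):
    """Fire (output 1) iff the weighted input sum reaches the threshold."""
    s = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if s >= theta else 0

# Example: two excitatory inputs (w = 1) and one inhibitory input (w = -2).
print(mp_neuron([1, 1, 0], [1, 1, -2], theta=2))  # 1: fires
print(mp_neuron([1, 1, 1], [1, 1, -2], theta=2))  # 0: inhibition blocks firing
```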

Slide 13: From Logical Neurons to Finite Automata

[Figure: logical neurons realizing AND, NOT (threshold 0), and OR; a Boolean net with inputs X, Y; a finite automaton with inputs X, Y and state Q. From Brains, Machines, and Mathematics, 2nd Edition, 1987.]
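The gates in this figure can be realized directly as McCulloch-Pitts neurons. The sketch below uses the standard weight and threshold choices (assumed here, since the slide's figure is not reproduced); mp_neuron repeats the firing rule from the previous sketch for self-containment.

```python
# Logic gates as McCulloch-Pitts neurons, with standard weight/threshold
# choices (an assumption; the original figure is not reproduced).
def mp_neuron(x, w, theta):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0

AND = lambda x, y: mp_neuron([x, y], [1, 1], 2)   # fires only if both inputs fire
OR  = lambda x, y: mp_neuron([x, y], [1, 1], 1)   # fires if either input fires
NOT = lambda x:    mp_neuron([x],    [-1],  0)    # inhibitory weight, threshold 0

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 0) == 0 and OR(0, 1) == 1
assert NOT(0) == 1 and NOT(1) == 0
```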

Slide 14: Philosophically and Technologically Important

- A major attack on dualism: the "brain" of a Turing machine
- A good global view of the input-output computational capacity of neural networks
- An important basis for the technology of artificial neural networks, with the addition of learning rules
- But not a neuron-by-neuron account of the brain's functions: logic is a culturally late activity of large neural populations, not a direct expression of neural function.

Slide 15: Increasing the Realism of Neuron Models

The McCulloch-Pitts neuron of 1943 is important as a basis for:
- logical analysis of the neurally computable, and
- current design of some neural devices (especially when augmented by learning rules to adjust synaptic weights).
However, it is no longer considered a useful model for making contact with neurophysiological data concerning real neurons.

Slide 16: Alan Hodgkin and Andrew Huxley

"What are the dynamics of the action potential (spike propagation along the axon)?" Mathematics: ordinary differential equations. A data-driven approach: giant squid axon → massive data → curve fitting → differential equations that elegantly describe these curves. Only later did mathematics and computer analysis explore the far-ranging implications of the Hodgkin-Huxley equations.

Slide 17: From Curve Fitting to Nonlinear Analysis

The Hodgkin-Huxley equations: 4 coupled differential equations (L: leakage, Na: sodium, K: potassium):

C dV/dt = -g_L (V - V_L) + I_a - g_Na m^3 h (V - V_Na) - g_K n^4 (V - V_K)

where
dm/dt = α_m(V) (1 - m) - β_m(V) m
dn/dt = α_n(V) (1 - n) - β_n(V) n
dh/dt = α_h(V) (1 - h) - β_h(V) h

with the α's and β's found by curve-fitting; the forms m^3 h and n^4 add a deep intuitive understanding that anticipates the structure of channels. Many properties of these 4-variable Hodgkin-Huxley equations can be explored analytically or qualitatively by two-dimensional differential equations:
- spontaneous spiking
- spiking which follows the input stimulus
- bursting
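To make the four coupled equations concrete, here is a minimal forward-Euler integration sketch using the classic 1952 parameter set and rate functions (in the original HH convention, with V the depolarization from rest in mV); the step size, duration, and applied current are illustrative assumptions.

```python
# Forward-Euler sketch of the Hodgkin-Huxley equations with the classic
# 1952 parameters (V in mV as displacement from rest).
import math

C = 1.0                              # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3    # maximal conductances, mS/cm^2
V_Na, V_K, V_L = 115.0, -12.0, 10.6  # reversal potentials, mV

def alpha_m(V): return 0.1 * (25 - V) / (math.exp((25 - V) / 10) - 1)
def beta_m(V):  return 4.0 * math.exp(-V / 18)
def alpha_h(V): return 0.07 * math.exp(-V / 20)
def beta_h(V):  return 1.0 / (math.exp((30 - V) / 10) + 1)
def alpha_n(V): return 0.01 * (10 - V) / (math.exp((10 - V) / 10) - 1)
def beta_n(V):  return 0.125 * math.exp(-V / 80)

dt, T = 0.01, 50.0                   # ms (illustrative choices)
V, m, h, n = 0.0, 0.05, 0.6, 0.32    # near-resting initial conditions
I_a = 10.0                           # applied current, uA/cm^2 (assumed)

for _ in range(int(T / dt)):
    # Ionic currents from the three conductances
    I_ion = (g_L * (V - V_L)
             + g_Na * m**3 * h * (V - V_Na)
             + g_K * n**4 * (V - V_K))
    V += dt * (I_a - I_ion) / C
    m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
    n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)

print(f"final V = {V:.2f} mV")       # with I_a = 10 the model spikes repeatedly
```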

Slide 19: From Hodgkin-Huxley to the Neuron

Compartmental modeling:
- Hundreds of compartments per neuron, modeling dendrites, soma, and synapses; versus
- A very few compartments that capture neural essentials: Traub and Miles on neurons in the hippocampus; Mainen and Sejnowski on pyramidal cells of cortex.

Slide 20: Leaky Integrator Neuron

The simplest "realistic" neuron model is a single-compartment, continuous-time model based on the firing rate (e.g., the number of spikes traversing the axon in the most recent 20 msec) as a continuously varying measure of the cell's activity. The state of the neuron is described by a single variable, the membrane potential. The firing rate is approximated by a sigmoid function of membrane potential.

Slide 21: Leaky Integrator Model

τ dm(t)/dt = -m(t) + h has solution m(t) = e^(-t/τ) m(0) + (1 - e^(-t/τ)) h → h, for time constant τ > 0. We now add synaptic inputs to get the leaky integrator model:

τ dm(t)/dt = -m(t) + Σ_i w_i X_i(t) + h

where X_i(t) is the firing rate at the i-th input. Excitatory input (w_i > 0) will increase m(t); inhibitory input (w_i < 0) will have the opposite effect.
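A minimal sketch of this model in code, integrated by forward Euler; the particular sigmoid, weights, and inputs are illustrative assumptions, not values from the lecture.

```python
# Leaky integrator neuron: tau dm/dt = -m + sum_i w_i X_i + h,
# with firing rate given by a sigmoid of membrane potential.
import math

tau = 10.0          # membrane time constant, ms (assumed)
h = 0.0             # resting level / bias input
w = [1.5, -0.8]     # one excitatory, one inhibitory synapse (assumed)
theta = 1.0         # sigmoid midpoint (assumed)

def firing_rate(m):
    """Sigmoid approximation of firing rate as a function of potential m."""
    return 1.0 / (1.0 + math.exp(-(m - theta)))

def step(m, X, dt=0.1):
    """One Euler step of the leaky integrator equation."""
    dm = (-m + sum(wi * xi for wi, xi in zip(w, X)) + h) / tau
    return m + dt * dm

m = 0.0
for _ in range(1000):                 # 100 ms of simulated time
    X = [1.0, 0.2]                    # presynaptic firing rates (assumed)
    m = step(m, X)
print(f"m = {m:.3f}, rate = {firing_rate(m):.3f}")  # m settles near 1.34
```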

Slide 22: Rall's Motion Detector Model

Slide 23: Alternative Models

Even at this simple level, there are alternative models. Some inhibitory synapses seem better described by shunting inhibition, which, applied at a given point on a dendrite, serves to divide, rather than subtract from, the potential change passively propagating from more distal synapses. The "lumped frequency" model cannot capture the subtle relative-timing effects crucial to our motion detector example; these might be approximated by introducing appropriate delay terms τ_i:

τ dm(t)/dt = -m(t) + Σ_i w_i x_i(t - τ_i) + h.
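One way such delay terms might be implemented is to buffer each input's recent history and read input i from τ_i ms in the past, as in the sketch below; all parameter values are illustrative assumptions.

```python
# Leaky integrator with per-synapse delays:
# tau dm/dt = -m + sum_i w_i x_i(t - tau_i) + h.
from collections import deque

tau = 10.0                  # membrane time constant, ms (assumed)
dt = 0.1                    # integration step, ms
w = [1.0, -1.0]             # synaptic weights (assumed)
delays = [0.0, 2.0]         # per-synapse delays tau_i, ms (assumed)
h = 0.0

max_lag = int(round(max(delays) / dt))
history = deque([[0.0] * len(w)] * (max_lag + 1), maxlen=max_lag + 1)

def step(m, X):
    """One Euler step, reading each input tau_i ms in the past."""
    history.append(list(X))  # newest sample at the right of the buffer
    total = sum(
        wi * history[-1 - int(round(di / dt))][i]   # sample from tau_i ago
        for i, (wi, di) in enumerate(zip(w, delays))
    )
    return m + dt * (-m + total + h) / tau

m = 0.0
for _ in range(500):        # 50 ms of simulated time
    m = step(m, [1.0, 1.0])
print(f"m = {m:.3f}")       # with equal inputs and opposite weights,
                            # m decays back toward 0 once both delays elapse
```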

Slide 24: No modeling approach is automatically appropriate

Rather, we seek the simplest model adequate to address the complexity of a given range of problems. The Neural Simulation Language NSLJ provides tools for modeling complex neural systems, especially (but not only) when the neurons are modeled as leaky integrator neurons.