Presentation transcript: CS 672 – Neural Networks, Lecture 1: Motivation & History

1. Welcome to CS 672 – Neural Networks, Fall 2010 (Lecture 1: Motivation & History, September 7, 2010). Instructor: Marc Pomplun

2. Instructor – Marc Pomplun. Office: S-3-171. Lab: S-3-135. Office Hours: Tuesdays 14:30–16:00, Thursdays 19:00–20:30. Phone: 287-6443 (office), 287-6485 (lab). E-Mail: marc@cs.umb.edu

3. The Visual Attention Lab: cognitive research, especially eye movements

4. Example: Distribution of Visual Attention

5–10. Selectivity in Complex Scenes (six image slides)

11. Artificial Intelligence

12. Modeling of Brain Functions

13. Biologically Motivated Computer Vision

14. Human-Computer Interfaces

15. Grading. For the assignments, exams, and your course grade, the following scheme will be used to convert percentages into letter grades:
95%: A, 90%: A-, 86%: B+, 82%: B, 78%: B-, 74%: C+, 70%: C, 66%: C-, 62%: D+, 56%: D, 50%: D-, below 50%: F

16. Complaints about Grading. If you think that the grading of your assignment or exam was unfair, write down your complaint (handwriting is OK), attach it to the assignment or exam, and give it to me or put it in my mailbox. I will re-grade the whole exam/assignment and return it to you in class.

17. Computers vs. Neural Networks
"Standard" computers: one CPU; fast processing units; reliable units; static infrastructure.
Neural networks: highly parallel processing; slow processing units; unreliable units; dynamic infrastructure.

18. Why Artificial Neural Networks? There are two basic reasons why we are interested in building artificial neural networks (ANNs). Technical viewpoint: some problems, such as character recognition or the prediction of future states of a system, require massively parallel and adaptive processing. Biological viewpoint: ANNs can be used to replicate and simulate components of the human (or animal) brain, thereby giving us insight into natural information processing.

19. Why Artificial Neural Networks? Why do we need a paradigm other than symbolic AI for building "intelligent" machines? Symbolic AI is well-suited for representing explicit knowledge that can be appropriately formalized. However, learning in biological systems is mostly implicit – it is an adaptation process based on uncertain information and reasoning. ANNs are inherently parallel and work extremely efficiently if implemented in parallel hardware.

20. How do NNs and ANNs work? The "building blocks" of neural networks are the neurons. In technical systems, we also refer to them as units or nodes. Basically, each neuron receives input from many other neurons, changes its internal state (activation) based on the current input, and sends one output signal to many other neurons, possibly including its input neurons (recurrent network).
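The three steps on this slide – receive inputs, update the activation, emit an output – can be sketched in code. This is a minimal illustration, not from the slides; the sigmoid activation function is an assumed example choice, and the names `neuron_output`, `weights`, and `bias` are hypothetical:

```python
import math

def neuron_output(inputs, weights, bias):
    # Receive inputs: form the weighted sum plus a bias term
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    # Update internal state and emit one output signal,
    # here squashed into (0, 1) by a sigmoid function
    return 1.0 / (1.0 + math.exp(-activation))

# A unit with two inputs
out = neuron_output([1.0, 0.5], [0.8, -0.4], bias=0.1)
```

In a network, this output value would in turn become one of the inputs to many other units.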

21. How do NNs and ANNs work? Information is transmitted as a series of electric impulses, so-called spikes. The frequency and phase of these spikes encode the information. In biological systems, one neuron can be connected to as many as 10,000 other neurons. Usually, a neuron receives its information from other neurons in a confined area, its so-called receptive field.

22. History of Artificial Neural Networks
1938: Rashevsky describes neural activation dynamics by means of differential equations.
1943: McCulloch & Pitts propose the first mathematical model for biological neurons.
1949: Hebb proposes his learning rule: repeated activation of one neuron by another strengthens their connection.
1958: Rosenblatt invents the perceptron by basically adding a learning algorithm to the McCulloch & Pitts model.
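Rosenblatt's addition of a learning algorithm to the McCulloch & Pitts threshold unit can be sketched as follows. This is an illustrative implementation of the standard perceptron learning rule, not code from the course; the function name and parameters are hypothetical:

```python
def train_perceptron(samples, epochs=10, lr=1.0):
    """Train a threshold unit on (inputs, target) pairs,
    with targets in {0, 1}; returns (weights, bias)."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, target in samples:
            # McCulloch & Pitts step: fire iff weighted sum reaches threshold
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0
            # Rosenblatt's rule: nudge weights by the prediction error
            err = target - y  # -1, 0, or +1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learn logical AND, a linearly separable function
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
```

On linearly separable data such as AND, this procedure is guaranteed to converge – the Perceptron Convergence Theorem mentioned on the next slide.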

23. History of Artificial Neural Networks
1960: Widrow & Hoff introduce the Adaline, a simple network trained through gradient descent.
1961: Rosenblatt proposes a scheme for training multilayer networks, but his algorithm is weak because of non-differentiable node functions.
1962: Hubel & Wiesel discover properties of the visual cortex, motivating self-organizing neural network models.
1963: Novikoff proves the Perceptron Convergence Theorem.

24. History of Artificial Neural Networks
1964: Taylor builds the first winner-take-all neural circuit with inhibition among output units.
1969: Minsky & Papert show that perceptrons are not computationally universal; interest in neural network research decreases.
1982: Hopfield develops his auto-association network.
1982: Kohonen proposes the self-organizing map.
1985: Ackley, Hinton & Sejnowski devise a stochastic network named the Boltzmann machine.

25. History of Artificial Neural Networks
1986: Rumelhart, Hinton & Williams provide the backpropagation algorithm in its modern form, triggering new interest in the field.
1987: Hecht-Nielsen develops the counterpropagation network.
1988: Carpenter & Grossberg propose the Adaptive Resonance Theory (ART).
Since then, research on artificial neural networks has remained active, leading to numerous new network types and variants, as well as hybrid algorithms and hardware for neural information processing.

