September 7, 2010 – Neural Networks Lecture 1: Motivation & History
Welcome to CS 672 – Neural Networks, Fall 2010
Instructor: Marc Pomplun

Instructor – Marc Pomplun
Office: S
Lab: S
Office Hours: Tuesdays 14:30-16:00, Thursdays 19:00-20:30
Phone: (office), (lab)

The Visual Attention Lab: Cognitive research, esp. eye movements

Example: Distribution of Visual Attention

Selectivity in Complex Scenes

Artificial Intelligence

Modeling of Brain Functions

Biologically Motivated Computer Vision

Human-Computer Interfaces

Grading
For the assignments, exams, and your course grade, the following scheme will be used to convert percentages into letter grades:
≥ 95%: A
≥ 90%: A-
≥ 86%: B+
≥ 82%: B
≥ 78%: B-
≥ 74%: C+
≥ 70%: C
≥ 66%: C-
≥ 62%: D+
≥ 56%: D
≥ 50%: D-
below 50%: F

Complaints about Grading
If you think that the grading of your assignment or exam was unfair,
- write down your complaint (handwriting is OK),
- attach it to the assignment or exam,
- and give it to me or put it in my mailbox.
I will re-grade the whole exam/assignment and return it to you in class.

Computers vs. Neural Networks

"Standard" Computers        Neural Networks
one CPU                     highly parallel processing
fast processing units       slow processing units
reliable units              unreliable units
static infrastructure       dynamic infrastructure

Why Artificial Neural Networks?
There are two basic reasons why we are interested in building artificial neural networks (ANNs):
- Technical viewpoint: Some problems, such as character recognition or the prediction of future states of a system, require massively parallel and adaptive processing.
- Biological viewpoint: ANNs can be used to replicate and simulate components of the human (or animal) brain, thereby giving us insight into natural information processing.

Why Artificial Neural Networks?
Why do we need a paradigm other than symbolic AI for building "intelligent" machines?
- Symbolic AI is well-suited for representing explicit knowledge that can be appropriately formalized.
- However, learning in biological systems is mostly implicit – it is an adaptation process based on uncertain information and reasoning.
- ANNs are inherently parallel and work extremely efficiently if implemented in parallel hardware.

How do NNs and ANNs work?
- The "building blocks" of neural networks are the neurons.
- In technical systems, we also refer to them as units or nodes.
- Basically, each neuron
  - receives input from many other neurons,
  - changes its internal state (activation) based on the current input,
  - sends one output signal to many other neurons, possibly including its input neurons (recurrent network).
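The three steps above can be sketched as a single artificial unit. This is a minimal illustration in Python; the function name is illustrative, and the sigmoid activation is one common choice rather than something the slide prescribes:

```python
import math

def neuron_output(inputs, weights, bias):
    """One artificial neuron: form the weighted sum of its inputs
    (the net input), then apply an activation function to obtain
    the unit's single output signal."""
    net = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-net))  # sigmoid squashes net into (0, 1)

# A unit with two input neurons: one excitatory, one inhibitory connection
out = neuron_output([1.0, 0.0], [0.5, -0.5], 0.0)
print(out)  # roughly 0.62
```

In a network, the output of each such unit becomes one of the inputs of many other units, which is exactly the fan-in/fan-out structure described above.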

How do NNs and ANNs work?
- Information is transmitted as a series of electric impulses, so-called spikes.
- The frequency and phase of these spikes encode the information.
- In biological systems, one neuron can be connected to as many as 10,000 other neurons.
- Usually, a neuron receives its information from other neurons in a confined area, its so-called receptive field.

History of Artificial Neural Networks
1938: Rashevsky describes neural activation dynamics by means of differential equations.
1943: McCulloch & Pitts propose the first mathematical model for biological neurons.
1949: Hebb proposes his learning rule: repeated activation of one neuron by another strengthens their connection.
1958: Rosenblatt invents the perceptron by basically adding a learning algorithm to the McCulloch & Pitts model.
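Rosenblatt's 1958 learning algorithm can be sketched in a few lines of Python: whenever the thresholded unit misclassifies an example, the weights are nudged toward the correct output. A minimal sketch; the function name and the AND example are illustrative, not from the slides:

```python
def train_perceptron(samples, epochs=10, lr=1.0):
    """Perceptron learning rule: for every misclassified example,
    move the weights in the direction of the correct output."""
    w = [0.0] * len(samples[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:  # target is 0 or 1
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - out     # -1, 0, or +1; zero means no update
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Logical AND is linearly separable, so the rule finds a separating line
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
```

Novikoff's convergence theorem (1963 in the timeline) guarantees that this procedure terminates whenever the two classes are linearly separable.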

1960: Widrow & Hoff introduce the Adaline, a simple network trained through gradient descent.
1961: Rosenblatt proposes a scheme for training multilayer networks, but his algorithm is weak because of non-differentiable node functions.
1962: Hubel & Wiesel discover properties of the visual cortex, motivating self-organizing neural network models.
1963: Novikoff proves the Perceptron Convergence Theorem.
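The Adaline's training procedure (the Widrow–Hoff delta, or LMS, rule) differs from the perceptron in that gradient descent is applied to the squared error of the raw linear output, before any thresholding. A minimal sketch in Python, with illustrative names and an AND example that are not from the slides:

```python
def train_adaline(samples, epochs=100, lr=0.1):
    """Delta (LMS) rule: stochastic gradient descent on the squared
    error of the linear output net = w.x + b, so even correctly
    classified examples keep refining the weights."""
    w = [0.0] * len(samples[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:      # target is -1 or +1
            net = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = target - net         # error of the *linear* output
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Same AND task with bipolar targets; classify by the sign of net
data = [([0, 0], -1), ([0, 1], -1), ([1, 0], -1), ([1, 1], 1)]
w, b = train_adaline(data)
```

Because the error is computed before thresholding, the weights converge toward the least-squares fit rather than merely stopping at the first separating boundary.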

1964: Taylor builds the first winner-take-all neural circuit with inhibitions among output units.
1969: Minsky & Papert show that perceptrons are not computationally universal; interest in neural network research decreases.
1982: Hopfield develops his auto-association network.
1982: Kohonen proposes the self-organizing map.
1985: Ackley, Hinton & Sejnowski devise a stochastic network named the Boltzmann machine.

1986: Rumelhart, Hinton & Williams provide the backpropagation algorithm in its modern form, triggering new interest in the field.
1987: Hecht-Nielsen develops the counterpropagation network.
1988: Carpenter & Grossberg propose Adaptive Resonance Theory (ART).
Since then, research on artificial neural networks has remained active, leading to numerous new network types and variants, as well as hybrid algorithms and hardware for neural information processing.