Summer 2011 Monday, 8/1

As you’re working on your paper Make sure to state your thesis and the structure of your argument in the very first paragraph. Help the reader (me!) by including signposts of where you are in the argument. Ask yourself what the point of each paragraph is and how it contributes to your argument. Give reasons for your claims! Don’t make unsupported assertions.

Neural Networks The brain can be thought of as a highly complex, non-linear, parallel computer whose structural constituents are neurons. There are billions of neurons in the brain. The computational properties of neurons are the reason why we’re interested in them more than in any of the other, non-neuronal cells in the brain.

Neural Networks Consider a simple recognition task, e.g. matching an image with a stored photograph. To perform the task, a computer must compare the image with thousands of stored photographs. At the end of all the comparisons, the computer may output the photograph that best matches the image. If the photograph database is as large as the one in our memory, this may take several hours. But our brain can do this instantly!

Neural Networks A silicon chip can perform a computation in nanoseconds (10^-9 seconds). But neuronal computations are done in milliseconds (10^-3 seconds), six orders of magnitude slower. Yet it seems that our computational capability (processing speed) is enormously greater than that of the typical computer. How is this possible?

Neural Networks The answer seems to lie in the massively parallel structure of the brain, which includes trillions of interconnections between neurons.

Artificial Neural Networks Inspired by the organization of the brain. Like the brain, they are composed of many simple processors linked in parallel. In the brain, the simple processors are neurons and the connections are axons and synapses. In connectionist theory, the simple processing elements (much simpler than neurons) are called units, and the connections are numerically weighted links between these units. Each unit takes inputs from a small group of neighboring units and passes outputs to a small group of neighbors.
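
Here is a minimal sketch of a single unit in Python; the weights, inputs, and bias values are invented for illustration. The unit just sums its weighted inputs and squashes the result into the range (0, 1):

    import numpy as np

    def unit_output(inputs, weights, bias):
        """One connectionist unit: a weighted sum of the inputs from
        neighboring units, squashed by a sigmoid into (0, 1)."""
        activation = np.dot(weights, inputs) + bias
        return 1.0 / (1.0 + np.exp(-activation))

    # Three neighboring units feed this one (all numbers made up).
    inputs = np.array([0.9, 0.1, 0.4])
    weights = np.array([0.5, -0.3, 0.8])
    print(unit_output(inputs, weights, bias=-0.2))  # ~0.63

Everything such a network “knows” lives in numbers like these weights, a point the later slides on distributed representation return to.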

NETtalk An artificial neural network that can be trained to pronounce English words. Consists of about 300 units (“neurons”) arranged in three layers: an input layer, which reads the words; an output layer, which generates speech sounds, or phonemes; and a middle “hidden layer,” which mediates between the other two. The units are joined to one another by 18,000 synapses, adjustable connections whose strengths can be turned up or down.
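
A rough sketch of that three-layer shape. The specific layer sizes here are assumptions chosen so the totals roughly match the figures above (203 + 80 + 26 = 309 units; 203×80 + 80×26 = 18,320 connections); the slides only say “about 300” and “18,000”:

    import numpy as np

    rng = np.random.default_rng(0)

    # Assumed sizes: input, hidden, and output layers.
    n_input, n_hidden, n_output = 203, 80, 26

    # The adjustable "synapses," initialized at random (see next slide).
    W_ih = rng.normal(scale=0.1, size=(n_hidden, n_input))
    W_ho = rng.normal(scale=0.1, size=(n_output, n_hidden))

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def pronounce(encoded_letters):
        """Input layer reads the (encoded) letters, the hidden layer
        mediates, the output layer yields phoneme activations."""
        hidden = sigmoid(W_ih @ encoded_letters)
        return sigmoid(W_ho @ hidden)

    print(pronounce(rng.random(n_input)).shape)  # (26,) phoneme units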

NETtalk At first the “volume controls” (the adjustable connection strengths) are set at random, and NETtalk is a structureless, homogenized tabula rasa. Provided with a list of words, it babbles incomprehensibly. But some of its guesses are better than others, and these are reinforced by adjusting the strengths of the synapses according to a set of learning rules. After half a day of training, the pronunciations become clearer and clearer, until NETtalk can recognize some 1,000 words. In a week, it can learn 20,000.

NETtalk NETtalk is not provided with any rules for how different letters are pronounced under different circumstances. (It has been argued that “ghoti” could be pronounced “fish”: “gh” as in “enough,” “o” as in “women,” and “ti” as in “nation.”) But once the system has evolved, it acts as though it knows the rules. They become implicitly coded in the network of connections, though no one has any idea where the rules are located or what they look like. (On the surface, there’s just “numerical spaghetti.”)

Back-Propagation The network begins with a set of randomly selected connection weights. It is then exposed to a large number of input patterns. For each input pattern, some (initially incorrect) output is produced. An automatic supervisory system monitors the output, compares it to the target output, and calculates small adjustments to the connection weights. This is repeated until (often) the network solves the problem and yields the desired input-output profile.
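
A minimal sketch of this loop in Python. The two-layer network, the toy XOR task, the learning rate, and the step count are all illustrative choices, not details from the lecture:

    import numpy as np

    rng = np.random.default_rng(1)
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

    # Toy task: XOR of two inputs.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    target = np.array([[0], [1], [1], [0]], dtype=float)

    # Randomly selected starting weights, as described above.
    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

    for step in range(20000):
        hidden = sigmoid(X @ W1 + b1)        # expose the network to inputs
        output = sigmoid(hidden @ W2 + b2)   # (initially incorrect) output
        error = output - target              # compare with target output
        # Back-propagate: each weight gets a small adjustment
        # proportional to its contribution to the mismatch.
        d_out = error * output * (1 - output)
        d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
        W2 -= 0.5 * hidden.T @ d_out
        b2 -= 0.5 * d_out.sum(axis=0)
        W1 -= 0.5 * X.T @ d_hid
        b1 -= 0.5 * d_hid.sum(axis=0)

    # With most initializations this approaches the desired [0, 1, 1, 0].
    print(output.round(2))

The “automatic supervisory system” is just the error computation and the small weight adjustments inside the loop.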

Distributed Representation A connectionist system’s knowledge base does not consist in a body of declarative statements written out in a formal notation. Rather, it inheres in the set of connection weights and the unit architecture. The information active during the processing of a specific input may be equated with the transient activation patterns of the hidden units. An item of information has a distributed representation if it is expressed by the simultaneous activity of a number of units.

Superpositional Coding Partially overlapping use of distributed resources, where the overlap is informationally significant. For example, the activation pattern for a black panther may share some of the substructure of the activation pattern for a cat. The public language words “cat” and “panther” display no such overlap.
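
A toy illustration of the last two slides, with all activation values invented for the example. The distributed patterns for the cat and the panther share measurable substructure; the word forms themselves do not (any shared letters are accidental, not semantic):

    import numpy as np

    # Hypothetical activation patterns over ten shared hidden units.
    cat     = np.array([0.9, 0.8, 0.1, 0.7, 0.0, 0.6, 0.0, 0.2, 0.9, 0.1])
    panther = np.array([0.8, 0.9, 0.2, 0.6, 0.1, 0.7, 0.0, 0.1, 0.8, 0.9])
    truck   = np.array([0.0, 0.1, 0.9, 0.0, 0.8, 0.0, 0.9, 0.7, 0.1, 0.2])

    def overlap(a, b):
        """Cosine similarity: how much two patterns share."""
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    print(overlap(cat, panther))  # ~0.90: informationally significant overlap
    print(overlap(cat, truck))    # ~0.14: little shared substructure

    # The public-language words display no such graded overlap:
    print(set("cat") & set("panther"))  # {'a', 't'}: accidental, not semantic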

“Free” Generalizations A benefit of connectionist architecture. Generalizations occur because a new input pattern, if it resembles the old one in some aspects, yields a response that’s rooted in that partial overlap.
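
Continuing the invented patterns above: an output unit whose weights are simply set to mirror the “cat” pattern (a Hebbian-style shortcut, not a trained network) responds almost as strongly to a new, partially overlapping input, and only weakly to an unrelated one:

    import numpy as np

    # Same invented patterns as in the previous sketch.
    cat   = np.array([0.9, 0.8, 0.1, 0.7, 0.0, 0.6, 0.0, 0.2, 0.9, 0.1])
    truck = np.array([0.0, 0.1, 0.9, 0.0, 0.8, 0.0, 0.9, 0.7, 0.1, 0.2])

    weights = cat / np.linalg.norm(cat)   # a unit tuned to the cat pattern

    novel = cat.copy()
    novel[[1, 3, 8]] = [0.5, 0.3, 0.6]    # new input, partial overlap with cat

    print(weights @ cat)    # ~1.78: strong response to the old pattern
    print(weights @ novel)  # ~1.34: generalization rooted in the overlap
    print(weights @ truck)  # ~0.24: little overlap, weak response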

Graceful Degradation Another benefit of connectionist architecture: the ability of the system to keep producing sensible responses despite some systematic damage. Such damage tolerance is possible in virtue of the use of distributed, superpositional storage schemes. This is similar to what goes on in our brains. Compare: messing with the wiring of a conventional computer, where even small damage is typically fatal.
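
A sketch of the same point, reusing the invented “cat” unit from above: knocking out more and more of its connections weakens the response gradually rather than breaking it outright, because the knowledge is spread across all ten weights:

    import numpy as np

    rng = np.random.default_rng(3)

    cat = np.array([0.9, 0.8, 0.1, 0.7, 0.0, 0.6, 0.0, 0.2, 0.9, 0.1])
    weights = cat / np.linalg.norm(cat)

    for n_lesioned in [0, 2, 4, 6]:
        damaged = weights.copy()
        dead = rng.choice(10, size=n_lesioned, replace=False)
        damaged[dead] = 0.0               # destroy a random subset of links
        print(n_lesioned, round(float(damaged @ cat), 2))
    # The response shrinks step by step; no single cut is fatal.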

Sub-symbolic representation Physical symbol systems displayed semantic transparency: familiar words and ideas were rendered as simple inner symbols. Connectionist approaches introduce greater distance between daily talk and the contents manipulated by the computational system. The contentful elements in a subsymbolic program do not reflect our ways of thinking about the task domain. The structure that’s represented by a large pattern of unit activity may be too rich and subtle to be captured in everyday language.

Post-training Analysis How do we figure out what knowledge and strategies the network is actually using to solve the problems in its task domain? 1. Artificial lesions. 2. Statistical analysis, e.g. PCA or cluster analysis.
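
A sketch of the statistical route, with a random untrained network standing in for a real trained one. Record the hidden-unit activations over many inputs, then run PCA (here via SVD) to see which directions of hidden activity carry the variance:

    import numpy as np

    rng = np.random.default_rng(4)
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

    W1 = rng.normal(size=(20, 8))      # stand-in: 20 inputs -> 8 hidden units

    inputs = rng.random((500, 20))     # probe the network with many inputs
    hidden = sigmoid(inputs @ W1)      # record hidden-unit activations

    centered = hidden - hidden.mean(axis=0)
    _, s, components = np.linalg.svd(centered, full_matrices=False)
    explained = s**2 / (s**2).sum()
    print(explained.round(3))          # share of variance per component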

Recurrent Neural Networks “Second generation” neural networks. Geared towards producing patterns that are extended in time (e.g. commands to produce a running motion) and towards recognizing temporally extended patterns (e.g. facial motions). They include a feedback loop that “recycles” some aspects of the network’s activity at time t1 along with the new inputs arriving at t2. The traces that are preserved act as a short-term memory, enabling the network to generate new responses that depend both on the current input and on the previous activity of the network.
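
A minimal sketch of that feedback loop (an Elman-style update with random, untrained weights, just to show the shape of the computation). The hidden state computed at one step is fed back in at the next, so the response at t2 depends on both the new input and the recycled activity from t1:

    import numpy as np

    rng = np.random.default_rng(5)
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

    n_in, n_hid = 4, 6
    W_in = rng.normal(size=(n_hid, n_in))    # weights on the new input
    W_rec = rng.normal(size=(n_hid, n_hid))  # weights on the recycled activity

    state = np.zeros(n_hid)                  # the preserved trace (memory)
    for t, x in enumerate(rng.random((5, n_in))):
        # Each response depends on the current input AND the prior activity.
        state = sigmoid(W_in @ x + W_rec @ state)
        print(t, state.round(2))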

Dynamical Connectionism “Third generation” connectionism. Puts even greater stress on dynamic and time-involving properties. Introduces more neurobiologically realistic features, including special-purpose units, more complex connectivity, computationally salient time delays in processing, deliberate use of noise, etc.