The Network Approach: Mind as a Web


Chapter Seven The Network Approach: Mind as a Web.

Connectionism The major field of the network approach. Connectionists construct Artificial Neural Networks (ANNs), which are computer simulations of how groups of neurons might perform some task.

Information processing ANNs utilize a processing strategy in which large numbers of computing units perform their calculations simultaneously. This is known as parallel distributed processing. In contrast, traditional computers are serial processors, performing one computation at a time.
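The contrast between serial and parallel processing can be sketched in code. The following is an illustrative comparison (the function names are ours, not the chapter's): a serial processor computes one unit's activation at a time, while a parallel distributed system updates all units simultaneously, which a single matrix-vector product stands in for here.

```python
import numpy as np

# A serial processor handles one computation at a time:
def serial_update(weights, inputs):
    outputs = []
    for row in weights:              # one unit after another
        total = sum(w * x for w, x in zip(row, inputs))
        outputs.append(total)
    return outputs

# A parallel distributed system updates every unit simultaneously;
# a single matrix-vector product stands in for that idea here:
def parallel_update(weights, inputs):
    return weights @ inputs          # all units computed "at once"

W = np.array([[0.2, 0.8], [0.5, -0.3], [1.0, 0.1]])
x = np.array([1.0, 2.0])
assert np.allclose(serial_update(W, x), parallel_update(W, x))
```

Both routes produce the same numbers; the difference is in how the work is organized.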

Serial and parallel processing architectures

Approaches The traditional approach in cognition and AI to solving problems is to use an algorithm in which every processing step is planned. It relies on symbols and operators applied to symbols. This is the knowledge-based approach. Connectionists instead let the ANN perform the computation on its own without any planning. They are concerned with the behavior of the network. This is the behavior-based approach.

Knowledge representation Information in an ANN exists as a collection of nodes and the connections between them. This is a distributed representation. Information in semantic networks, however, can be stored in a single node. This is a form of local representation.

Characteristics of ANNs A node is a basic computing unit. A link is the connection between one node and the next. Weights specify the strength of connections. A node fires if it receives activation above threshold.
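These characteristics can be captured in a few lines. Below is a minimal sketch of a single node with weighted links and an all-or-none firing threshold; the weights and threshold are illustrative values, not from the text.

```python
# A minimal threshold node: it fires (outputs 1) only when the weighted
# sum of its inputs exceeds the threshold.
def node_fires(inputs, weights, threshold):
    activation = sum(w * x for w, x in zip(weights, inputs))
    return 1 if activation > threshold else 0

# Two input nodes linked to one node with connection weights 0.6 and 0.9:
print(node_fires([1, 1], [0.6, 0.9], threshold=1.0))  # 1.5 > 1.0, fires: 1
print(node_fires([1, 0], [0.6, 0.9], threshold=1.0))  # 0.6 < 1.0, silent: 0
```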

Characteristics of ANNs A basis function determines the amount of stimulation a node receives. An activation function maps the strength of the inputs onto the node’s output; a common choice is a sigmoidal activation function.
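A short sketch of the two functions, assuming the common weighted-sum basis function and the sigmoid as the activation function:

```python
import math

# Basis function: the weighted sum of a node's inputs.
def basis(inputs, weights):
    return sum(w * x for w, x in zip(weights, inputs))

# Sigmoidal activation function: squashes any input strength into (0, 1),
# giving a graded output instead of an all-or-none firing decision.
def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

net = basis([1.0, 0.5], [0.8, -0.4])   # 0.8 - 0.2 = 0.6
print(round(sigmoid(net), 3))          # 0.646
```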

Early neural networks Hebb (1949) describes two types of cell groupings. A cell assembly is a small group of neurons that repeatedly stimulate themselves. A phase sequence is a set of cell assemblies that activate each other.

Early neural networks Perceptrons were simple networks that could detect and recognize visual patterns. Early perceptrons had only two layers, an input and an output layer.
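A sketch of such a two-layer perceptron, trained with the classic perceptron learning rule to recognize the logical AND pattern (the learning rate and epoch count are illustrative choices):

```python
# Train a single output unit on input patterns using the perceptron rule:
# nudge each weight by (target - output) scaled by the input and a
# learning rate, repeating over the training set.
def train_perceptron(samples, epochs=10, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
for x, t in AND:
    assert (1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) == t
```

AND is linearly separable, so the rule converges; famously, a two-layer perceptron cannot learn XOR, which motivated the hidden layers discussed next.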

Modern ANNs More recent ANNs contain three layers, an input, hidden, and output layer. Input units activate hidden units, which then activate the output units.
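The forward flow of activation through the three layers can be sketched as follows; the weights are arbitrary illustrative values for a 2-input, 2-hidden-unit, 1-output network.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Input units activate the hidden units, which activate the output units.
def forward(x, W_hidden, W_output):
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x)))
              for row in W_hidden]
    output = [sigmoid(sum(w * hi for w, hi in zip(row, hidden)))
              for row in W_output]
    return output

y = forward([1.0, 0.0],
            W_hidden=[[0.5, -0.5], [0.3, 0.8]],
            W_output=[[1.0, -1.0]])
print(y)  # a single graded output between 0 and 1
```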

Backpropagation learning in ANNs An ANN can learn to make a correct response to a particular stimulus input. The initial response is compared to a desired response supplied by a teacher. The difference between the two, an error signal, is sent back through the network. This changes the weights so that the actual response moves closer to the desired one.
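A simplified sketch of this error-driven weight change for a single sigmoid output unit (full backpropagation also pushes the error signal back through the hidden layer; the learning rate and starting weights here are illustrative):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# One backpropagation-style update: compare the actual response to the
# teacher's desired response and adjust the weights by the error signal,
# scaled by the slope of the sigmoid.
def backprop_step(weights, inputs, target, lr=0.5):
    actual = sigmoid(sum(w * x for w, x in zip(weights, inputs)))
    error = target - actual                  # error signal from the teacher
    delta = error * actual * (1 - actual)    # scaled by the sigmoid's slope
    new_weights = [w + lr * delta * x for w, x in zip(weights, inputs)]
    return new_weights, abs(error)

w, x, teacher = [0.1, 0.9], [1.0, 1.0], 0.0
errors = []
for _ in range(50):
    w, err = backprop_step(w, x, teacher)
    errors.append(err)
print(errors[0] > errors[-1])  # True: the response moves toward the teacher's
```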

Criteria of different ANNs Supervised networks have a teacher. Unsupervised networks do not. Networks can be either single-layer or multilayer. Information in a network can flow forward only, a feed-forward network, or it can flow back and forth between layers, a recurrent network.

Network typologies Hopfield-Tank networks. Supervised, single-layer, and laterally connected; good at recovering “clean” versions of noisy patterns. Kohonen networks. Two-layer and unsupervised; able to create topological maps of features present in the input. Adaptive Resonance Theory (ART) networks. Unsupervised, multilayer, and recurrent; they classify input patterns.
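The pattern-cleaning ability of a Hopfield-style network can be sketched in a few lines. This is a minimal illustration, not the Hopfield-Tank formulation itself: one binary pattern is stored with a Hebbian outer-product rule, then recovered from a corrupted probe.

```python
import numpy as np

# Store one binary (+1/-1) pattern in the lateral connection weights:
pattern = np.array([1, -1, 1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)                      # no self-connections

def recall(probe, steps=5):
    state = probe.copy()
    for _ in range(steps):                  # repeated synchronous updates
        state = np.where(W @ state >= 0, 1, -1)
    return state

noisy = pattern.copy()
noisy[0] = -noisy[0]                        # flip one unit to add "noise"
print(recall(noisy))                        # settles back to the stored pattern
```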

Evaluating connectionism Advantages: biological plausibility, graceful degradation, interference, and generalization. Disadvantages: no massive parallelism, convergent dynamics, the stability-plasticity dilemma, and catastrophic interference.

Semantic networks Share some features with ANNs. Individual nodes represent meaningful concepts. Used to explain the organization and retrieval of information from long-term memory (LTM).

Characteristics of semantic networks Spreading activation. Activity spreads outward from nodes along links and activates other nodes. Retrieval cues. Nodes associated with others can activate them indirectly. Priming. Residual activation can facilitate responding.
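A sketch of spreading activation, using a tiny illustrative network around the classic canary example: activity starts at one node at full strength, spreads outward along links, and decays with each hop, leaving the residual activation that underlies priming.

```python
# Links between concept nodes (an illustrative fragment, not the book's figure):
links = {
    "canary": ["bird", "yellow"],
    "bird": ["canary", "animal", "wings"],
    "animal": ["bird"],
    "yellow": [],
    "wings": [],
}

def spread(start, decay=0.5, hops=2):
    activation = {start: 1.0}
    frontier = {start: 1.0}
    for _ in range(hops):
        nxt = {}
        for node, act in frontier.items():
            for neighbor in links.get(node, []):
                passed = act * decay          # activity decays at each hop
                if passed > activation.get(neighbor, 0.0):
                    nxt[neighbor] = activation[neighbor] = passed
        frontier = nxt
    return activation

act = spread("canary")
print(act)  # "bird" and "yellow" receive 0.5; "animal" and "wings" 0.25
```

Nodes one link away end up more active than nodes two links away, which is why nearby concepts make better retrieval cues and show stronger priming.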

A hierarchical semantic network Sentence verification tasks suggest a hierarchical organization of concepts in semantic memory (Collins and Quillian, 1969). Meaning for concepts such as animals may be arranged into superordinate, ordinate, and subordinate categories. Vertical distance in the network corresponds to category membership. Horizontal distance corresponds to property information.
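The hierarchical prediction can be made concrete: verifying “a canary is an animal” requires traversing more levels than “a canary is a bird,” so it should take longer. Below is a sketch over a small illustrative hierarchy (not the book's exact figure).

```python
# Superordinate links in a small Collins & Quillian-style hierarchy:
isa = {"canary": "bird", "ostrich": "bird", "bird": "animal", "fish": "animal"}

# Count how many levels must be traversed to verify category membership.
def levels_to_verify(concept, category):
    steps = 0
    while concept != category:
        if concept not in isa:
            return None              # not a member of that category
        concept = isa[concept]
        steps += 1
    return steps

print(levels_to_verify("canary", "bird"))    # 1 level: fast verification
print(levels_to_verify("canary", "animal"))  # 2 levels: slower verification
```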

Propositional networks Can represent propositional or sentence-like information. Example: “The man threw the ball.” Allow for more complex relationships between concepts such as agents, objects, and relations. Can also code for episodic knowledge.
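One illustrative way to encode the example proposition, with labeled links for the relation, agent, and object plus an episodic tag (the field names are ours, not a standard from the text):

```python
# "The man threw the ball" as a propositional node with labeled links:
proposition = {
    "relation": "threw",
    "agent": "man",
    "object": "ball",
    "episodic": {"tense": "past"},   # episodic knowledge can be coded too
}
print(proposition["agent"], proposition["relation"], proposition["object"])
```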