COMPUTATIONAL COGNITIVE SCIENCE

Cognitive Revolution The development of the computer led to the rise of cognitive psychology and artificial intelligence. BINAC, the Binary Automatic Computer, was developed in 1949.

Artificial Intelligence Constructing artificial, computer-based systems that produce intelligent outcomes. Examples: –Game-playing programs (Deep Blue) –Intelligent robots (Mars rovers, DARPA's Urban Challenge) –The Netflix competition –Conversational agents

Weak vs. Strong AI Weak AI: using AI as a tool to understand human cognition. Strong AI: the claim that a properly programmed computer has a “mind” capable of understanding.

Turing Test Can artificial intelligence be as good as human intelligence? How can we test this? The Turing test (1950) –designed to test whether humans can distinguish between humans and computers based on conversations –A human interrogator could ask a respondent (either a computer or a human, whose identity was hidden) any question he or she wished, and based on the response, the interrogator had to decide whether the answer was given by the computer or by the human. Alan Turing.

A classic conversational agent The Turing Test inspired an early, satirical attempt to create a computerized Rogerian therapist, “Eliza”. Eliza’s program is a set of “If…Then” rules: –“IF person mentions ‘father’ THEN say ‘Who else in your family comes to mind when you think about this?’” –“IF person writes ‘I remember X’ THEN say ‘Does it make you feel happy to recall X?’” ELIZA has no intelligence itself; the intelligence comes from people interpreting its statements.
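To make the rule format concrete, here is a minimal sketch of an ELIZA-style rule matcher in Python. It is an illustrative toy, not Weizenbaum's original script: the RULES table, the respond function, the third rule, and the default reply are all invented for this example.

```python
import re

# Toy ELIZA-style rule table: each rule is a (pattern, response template) pair.
# The first two rules paraphrase the slide; the third and the default reply are made up.
RULES = [
    (r"\bfather\b",        "Who else in your family comes to mind when you think about this?"),
    (r"\bI remember (.+)", "Does it make you feel happy to recall {0}?"),
    (r"\bI feel (.+)",     "Why do you feel {0}?"),
]

def respond(sentence):
    """Return the response of the first rule whose IF-part matches, else a default prompt."""
    for pattern, template in RULES:
        match = re.search(pattern, sentence, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please tell me more."

print(respond("I remember my first day of school"))
# -> Does it make you feel happy to recall my first day of school?
```

As the slide notes, nothing here "understands" anything: the program only matches surface patterns and echoes pieces of the input back.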

Example of modern conversational agents [the following is a fragment where A = your instructor; B = a very good artificial agent] A: are you human? B: What do you think? I am a Supreme Ruler. A: seriously B: I am always serious. A: who was Turing? B: A brilliant and tragic figure in the history of computer science. A: Do you think the Turing test is a good one? B: Some of them can. A: are you having a good day? B: I am having fun. A: How do you know you are having fun? B: By introspection.

Future of AI Computer chip capacity and processing speed are increasing exponentially. Some theorists (e.g., Ray Kurzweil) believe this will lead to a technological singularity, along with dramatic improvements in AI.

Computational Modeling Most modeling in cognitive science targets natural intelligence. The goal is to model or mimic some aspects of human cognitive functioning –produce the same errors as humans → simulations of aspects of human behaviour.

Why do we need computational models? They make vague verbal terms specific –providing the precision needed to specify complex theories. They provide explanations. They yield quantitative predictions –just as meteorologists use computer models to predict tomorrow's weather, the goal of modeling human behavior is to predict performance in novel settings.

Neural Networks

An alternative to traditional information-processing models. Also known as: –PDP (the parallel distributed processing approach) –Connectionist models (David Rumelhart, Jay McClelland)

Neural Networks Neural networks are networks of simple processors that operate simultaneously. They have some biological plausibility.

Idealized neurons (units): an abstract, simplified description of a neuron. [Figure: inputs → processor → output.]

Neural Networks Units: –Activation = the activity of a unit –Weight = the strength of the connection between two units. Learning = changing the strength of the connections between units. Excitatory and inhibitory connections –correspond to positive and negative weights, respectively.

An example calculation for a single (artificial) neuron [Figure: diagram showing how the inputs from a number of units are combined to determine the overall net input to unit-i.] Unit-i has a threshold of 1: if its net input exceeds 1 it will respond with +1, but if the net input is less than 1 it will respond with –1.
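As a minimal sketch of the same calculation in Python (the input values and weights below are invented for illustration; they are not the numbers from the slide's figure):

```python
# A single threshold unit, as described above: output +1 if the net input
# exceeds the threshold, otherwise -1. Inputs and weights are made up here.
inputs  = [+1, -1, +1, +1]            # activities of the sending units J1..J4
weights = [0.5, 0.25, 0.75, -0.5]     # connection weights onto unit-i
threshold = 1.0

net_input = sum(x * w for x, w in zip(inputs, weights))   # 0.5 - 0.25 + 0.75 - 0.5 = 0.5
output = +1 if net_input > threshold else -1
print(net_input, output)              # 0.5 -1  (net input is below the threshold)
```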

What would happen if we change the input J3 from +1 to -1? a) output changes to -1 b) output stays at +1 c) do not know. What would happen if we change the input J4 from +1 to -1? a) output changes to -1 b) output stays at +1 c) do not know.

If we want a positive correlation between the output and input J3, how should we change the weight for J3? a) make it negative b) make it positive c) do not know.

Multi-layered Networks Activation flows from a layer of input units, through a set of hidden units, to the output units. Weights determine how input patterns are mapped to output patterns. [Figure: input units → hidden units → output units.]
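A minimal sketch of this flow of activation, assuming a tiny made-up network with a simple step activation (the layer sizes, weights, and activation function are illustrative choices, not taken from the slide):

```python
import numpy as np

def step(x):
    """Simple threshold activation: 1 if the net input is positive, else 0."""
    return (x > 0).astype(float)

# Made-up weights for a tiny 3-input, 2-hidden, 1-output network.
W_input_to_hidden = np.array([[ 0.5, -0.4],
                              [ 0.3,  0.8],
                              [ 0.6,  0.2]])
W_hidden_to_output = np.array([[ 1.0],
                               [-1.0]])

input_pattern = np.array([1.0, 0.0, 1.0])
hidden_pattern = step(input_pattern @ W_input_to_hidden)    # -> [1., 0.]
output_pattern = step(hidden_pattern @ W_hidden_to_output)  # -> [1.]
print(hidden_pattern, output_pattern)
```

Changing any of the weights changes which output pattern a given input pattern is mapped to, which is exactly what learning exploits (next slide).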

Multi-layered Networks The network can learn to associate output patterns with input patterns by adjusting its weights. Hidden units tend to develop internal representations of the input-output associations. Backpropagation is a common weight-adjustment algorithm.
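A minimal sketch of such learning, assuming a small sigmoid network trained with backpropagation on the XOR mapping. The task, layer sizes, learning rate, and number of epochs are illustrative choices, and whether it converges depends on the random initialization:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# XOR task: 2 input units, 3 hidden units, 1 output unit (sizes chosen for the example).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # input patterns
T = np.array([[0], [1], [1], [0]], dtype=float)               # target output patterns

W1 = rng.normal(0.0, 1.0, (2, 3))   # input -> hidden weights
b1 = np.zeros((1, 3))
W2 = rng.normal(0.0, 1.0, (3, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))
lr = 0.5                            # learning rate

for epoch in range(10000):
    # Forward pass: activation flows input -> hidden -> output.
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    # Backward pass: propagate the error back through the weights
    # (gradient of the squared error, using the sigmoid derivative y * (1 - y)).
    dY = (Y - T) * Y * (1 - Y)
    dH = (dY @ W2.T) * H * (1 - H)
    # Adjust the weights a small step against the gradient.
    W2 -= lr * (H.T @ dY); b2 -= lr * dY.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ dH); b1 -= lr * dH.sum(axis=0, keepdims=True)

print(np.round(Y, 2))   # should end up close to the XOR targets [0, 1, 1, 0]
```

After training, the hidden units carry an internal re-coding of the inputs that makes the XOR mapping (which no single-layer network can compute) linearly solvable at the output.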

A classic neural network: NETtalk (after Hinton, 1989) Architecture: 7 groups of 29 input units encoding a 7-letter window of text (e.g. “_a_cat_”), 80 hidden units, and 26 output units representing the target phoneme (e.g. /k/), with a teacher signal supplying the correct output for the target letter. The network learns to pronounce English words, i.e., it learns spelling-to-sound relationships. Listen to this audio demo.

Other Demos & Tools If you are interested, here is a tool to create your own neural network and train it on data. Other demos: a Hopfield network; the backpropagation algorithm and competitive learning; various networks; optical character recognition; a brain-wave simulator.

Recent Neural Network Research (since 2006) “Deep neural networks”, by Geoff Hinton –Demos of learning digits –Demos of learning faces –Demos of learned movements. What is new about these networks? –they can stack many hidden layers –they can capture more regularities in the data and generalize better –activity can flow from input to output and vice versa. In case you want to see more details: YouTube video.

Different ways to represent information with neural networks: localist representation Each unit represents just one item (“grandmother” cells). [Table: activations of Units 1-6 (0 = off, 1 = on) for concepts 1-3.]

Distributed Representations (aka Coarse Coding) Each unit is involved in the representation of multiple items. [Table: activations of Units 1-6 (0 = off, 1 = on) for concepts 1-3.]
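A minimal sketch of the contrast between the two coding schemes, using invented patterns over six units (the actual values from the slides' tables are not reproduced in this transcript):

```python
# Two ways of coding three concepts over six binary units.
# The specific patterns below are made up for illustration.

# Localist: each concept switches on exactly one dedicated ("grandmother") unit.
localist = {
    "concept 1": [1, 0, 0, 0, 0, 0],
    "concept 2": [0, 1, 0, 0, 0, 0],
    "concept 3": [0, 0, 1, 0, 0, 0],
}

# Distributed (coarse) coding: each unit takes part in several concepts,
# and each concept is a pattern of activity over many units.
distributed = {
    "concept 1": [1, 1, 0, 0, 1, 0],
    "concept 2": [0, 1, 1, 0, 0, 1],
    "concept 3": [1, 0, 1, 1, 0, 0],
}

# Unit 2, for example, participates in two concepts in the distributed code,
# but in at most one concept in the localist code.
print([name for name, p in distributed.items() if p[1] == 1])   # ['concept 1', 'concept 2']
```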

Suppose we lost unit 6. Can the three concepts still be discriminated? a) NO b) YES c) do not know. [Table: the same distributed activations of Units 1-6 (0 = off, 1 = on) for concepts 1-3, with unit 6 lost.]

   Item   Representation A (Units 1-4)   Representation B (Units 1-4)
   W      1 0 0 0                        1 0 0 1
   X      1 0 0 0                        0 1 1 0
   Y      1 0 0 0                        0 1 0 1
   Z      1 0 0 0                        1 0 1 0
Which representation is a good example of distributed representation? a) representation A b) representation B c) neither

Advantage of Distributed Representations Efficiency –solves the combinatorial explosion problem: with n binary units, 2^n different representations are possible (compare how many English words can be formed from combinations of only 26 letters). Damage resistance –even if some units do not work, information is still preserved –because information is distributed across the network, performance degrades gradually as a function of damage –(aka robustness, fault tolerance, graceful degradation).
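A minimal sketch of both points, again with invented patterns: n binary units allow 2^n distinct activity patterns, and losing one unit can still leave the concepts distinguishable:

```python
from itertools import combinations

# Made-up distributed patterns over n = 6 binary units (same spirit as the earlier sketch).
patterns = {
    "concept 1": (1, 1, 0, 0, 1, 0),
    "concept 2": (0, 1, 1, 0, 0, 1),
    "concept 3": (1, 0, 1, 1, 0, 0),
}

n = 6
print("patterns possible with", n, "binary units:", 2 ** n)   # 64

# Simulate damage: unit 6 (index 5) stops working, so its activity is lost.
damaged = {name: pattern[:5] for name, pattern in patterns.items()}

# Every pair of concepts is still distinguishable from the remaining five units,
# so performance degrades gracefully rather than collapsing.
all_distinct = all(damaged[a] != damaged[b] for a, b in combinations(damaged, 2))
print("concepts still discriminable after losing unit 6:", all_distinct)   # True
```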

Neural Network Models Inspired by real neurons and brain organization, but highly idealized. They can spontaneously generalize beyond the information explicitly given to the network, and can retrieve information even when the network is damaged (graceful degradation). Networks can be taught: learning is possible by changing the weighted connections between nodes.