Split-Brain Studies What do you see? “Nothing”


Split-Brain Studies What do you see? “Nothing” Left Visual Field -> Right Hemisphere -> Can’t Speak

Split-Brain Studies What do you see? “Triangle” Right Visual Field -> Left Hemisphere -> Can Speak

Split-Brain Studies Point to correct match “Heart” Left Visual Field -> Right Hemisphere -> Can understand simple words -> Can Point

Split-Brain Studies Language in both hemispheres “What is your name?” “Bob”

Split-Brain Studies Language in both hemispheres “What do you want to be when you grow up?” “I want to be a director of independent films and then sell out to major studios and make a bundle”

Split-Brain Studies Language in both hemispheres “What do you want to be when you grow up?” “I want to be a professional hockey player”

Split-Brain Studies Language in both hemispheres “What’s your favorite CD?” “Britney Spears”

Split-Brain Studies Language in both hemispheres “What’s your favorite CD?” “Ozzy Osbourne – Heavy Metal”

Split-Brain Studies Two people in this guy’s head? More than two people? How many in your head? “Feels like” one person, but is this an illusion?

Studying “Experience” or “Consciousness” Measurement problems: we measure experience by the reports of the subject, but reports of experience are pretty fuzzy. We can study our own experience, but that is pretty limited. What’s it like to be a split-brain patient? The only real way to know is to zap your own cortex, and there are limits to curiosity.

What’s a model? A model is a structure that provides a mapping to the thing being modelled. A model of the solar system: a big central thing (maps to the sun), small things going around the big thing (map to the planets). It leaves some stuff out: the paths of the objects are elliptical; solar flares.

Symbolic Models What are symbols? Symbols are things that represent something else: 3 (the number three), FROG.

Symbolic Models What are symbols? Symbols are things that represent something else. Semantics / Intentionality: numbers map to marks on paper (one → “one”, two → “two”, three → “three”, four → “four”).

Symbolic Models What are symbols? Symbols are things that represent something else. Isomorphic relationship: numbers map one-to-one to marks on paper (one → “I”, two → “II”, three → “III”, four → “IV”).

Symbolic Models What are symbols? Symbols are things that represent something else. Homomorphic relationship: many animals map to one mark on paper (“cat”, “bird”).

Symbolic Models What are symbols? Symbols are things that represent something else. Quasi-homomorphic relationship (Q-morph): animals map to marks on paper, but with errors: both the bird and the McNuggets end up mapped to “chicken”.

Symbolic Models What are symbols? Symbols are things that represent something else. A set of symbols and a set of items in the world. A correspondence can be drawn between elements in each set. This correspondence can be: Isomorphic (one-to-one) Homomorphic (many-to-one) Q-morphic (approximate many-to-one, with errors)
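As a toy illustration (the dictionaries and helper below are hypothetical, not from the lecture), the difference between isomorphic and homomorphic correspondences comes down to whether the mapping is one-to-one:

```python
# Hypothetical illustration of the correspondence types using Python dicts.

# Isomorphic: one-to-one between numbers and Roman-numeral marks.
iso = {1: "I", 2: "II", 3: "III", 4: "IV"}

# Homomorphic: many-to-one -- several individual animals map to one word.
# (A Q-morph would be the same idea, but with occasional mapping errors.)
homo = {"tweety": "bird", "felix": "cat", "sylvester": "cat"}

def is_isomorphic(mapping):
    """One-to-one iff no two keys share the same value."""
    return len(set(mapping.values())) == len(mapping)

print(is_isomorphic(iso))   # True: one-to-one
print(is_isomorphic(homo))  # False: many-to-one
```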

Symbolic Models The first symbolic computational model: General Problem Solver (GPS) (by Simon and Newell) But first, an example problem...

Tower of Hanoi Puzzle

Symbolic Models General Problem Solver (GPS) Uses IF-THEN rules, called productions Uses goals, and breaks them into sub-goals. Back to the example problem...

Tower of Hanoi Puzzle Goal: Move stack[1] to peg C. Production: disk[1] is not on peg C, and is not free → subgoal: move stack[2] to peg B. (Diagram: peg A, bottom to top: 1 2 3 4; stack[1] = disks 1–4.)

Tower of Hanoi Puzzle Goal: Move stack[2] to peg B. Production: disk[2] is not on peg B, and is not free → subgoal: move stack[3] to peg C. (Diagram: A: 1 2 3 4; stack[2] = disks 2–4.)

Tower of Hanoi Puzzle Goal: Move stack[3] to peg C. Production: disk[3] is not on peg C, and is not free → subgoal: move stack[4] to peg B. (Diagram: A: 1 2 3 4; stack[3] = disks 3–4.)

Tower of Hanoi Puzzle Goal: Move stack[4] to peg B. Production: disk[4] is not on peg B, but is free! → so move disk[4] to peg B. (Diagram: A: 1 2 3 4; stack[4] = disk 4.)

Tower of Hanoi Puzzle Goal: Move stack[4] to peg B. Production: disk[4] is on peg B, and is free, but there is no stack[5] → so we’re done with this goal! (Diagram: A: 1 2 3; B: 4.)

Tower of Hanoi Puzzle Previous Goal: Move stack[3] to peg C. Production: disk[3] is not on peg C, but it is free → so move disk[3] to peg C. (Diagram: A: 1 2; B: 4; C: 3.)

Tower of Hanoi Puzzle Previous Goal: Move stack[3] to peg C. Production: disk[3] is on peg C, and it is free → so subgoal: move stack[4] to peg C. (Diagram: A: 1 2; B: 4; C: 3.)

Tower of Hanoi Puzzle Goal: Move stack[4] to peg C. Production: disk[4] is not on peg C, and it is free → so move disk[4] to peg C. (Diagram: A: 1 2; B: 4; C: 3.)

Tower of Hanoi Puzzle Goal: Move stack[4] to peg C. Production: disk[4] is not on peg C, and it is free → so move disk[4] to peg C. (Diagram: A: 1 2; C: 3 4.)

Tower of Hanoi Puzzle Goal: Move stack[4] to peg C. Production: disk[4] is on peg C, and is free, but there is no stack[5] → so done with this goal! (Diagram: A: 1 2; C: 3 4.)

Tower of Hanoi Puzzle Previous Goal: Move stack[3] to peg C. Production: disk[3] is on peg C and has stack[4] on top of it → so done with this goal! (Diagram: A: 1 2; C: 3 4.)

Tower of Hanoi Puzzle Previous Goal: Move stack[2] to peg B. Production: disk[2] is not on peg B, and is free → so move disk[2] to peg B. (Diagram: A: 1; B: 2; C: 3 4.)

Tower of Hanoi Puzzle Previous Goal: Move stack[2] to peg B. Production: disk[2] is on peg B, and is free → so set up a subgoal to move stack[3] onto peg B. (Diagram: A: 1; B: 2; C: 3 4.)

Tower of Hanoi Puzzle …and so on… (Diagram: A: 1; B: 2; C: 3 4.)

Tower of Hanoi Puzzle DONE! (Diagram: C, bottom to top: 1 2 3 4.)
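The goal/subgoal strategy traced in these slides can be sketched recursively. The peg representation (lists ordered bottom to top, disk 1 largest) follows the slides' numbering; the function and variable names are illustrative, not GPS itself.

```python
def move_stack(k, n, source, target, spare, pegs):
    """Move stack[k] (disks k..n) from source to target, echoing the
    goal/subgoal structure in the slides. disk[1] is the largest."""
    if k > n:
        return
    # Subgoal: disk[k] is not free, so clear the smaller disks onto the spare.
    move_stack(k + 1, n, source, spare, target, pegs)
    # disk[k] is now free: move it (the production's THEN part).
    pegs[source].remove(k)
    pegs[target].append(k)
    # Subgoal: move stack[k+1] back on top of disk[k].
    move_stack(k + 1, n, spare, target, source, pegs)

pegs = {"A": [1, 2, 3, 4], "B": [], "C": []}
move_stack(1, 4, "A", "C", "B", pegs)
print(pegs)  # {'A': [], 'B': [], 'C': [1, 2, 3, 4]}
```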

Symbolic Models General Problem Solver (GPS) Uses IF-THEN rules, called productions Uses goals, and breaks them into sub-goals. ACT* Three components: Production Memory (IF-THEN rules) Declarative Memory (facts about the world) Working Memory (information currently being processed)

ACT* (Architecture diagram: Declarative Memory connects to Working Memory through storage and retrieval; Production Memory connects to Working Memory through match and execution; Working Memory connects to the world through perception and action.)

ACT* example: perception at a busy intersection puts “I’m driving!” and “Stop sign!” into Working Memory; the production IF (driving) and (stop sign) THEN brake! in Production Memory matches, and its execution puts “Brake!” into Working Memory, producing the action.
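The match/execute cycle in this example can be sketched in Python. The representation here (productions as condition-set/action pairs, working memory as a set of facts) is a toy assumption for illustration, not ACT*'s actual implementation.

```python
# Minimal sketch of a production-system match/execute cycle,
# using the stop-sign example from the slide.

production_memory = [
    ({"driving", "stop sign"}, "brake!"),
    ({"driving", "green light"}, "go"),
]

working_memory = {"driving", "stop sign"}  # facts placed here by perception

def cycle(productions, wm):
    """Fire every production whose IF-part is fully matched in working memory."""
    actions = []
    for condition, action in productions:
        if condition <= wm:  # every condition fact is present in WM
            actions.append(action)
    return actions

print(cycle(production_memory, working_memory))  # ['brake!']
```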

Symbolic Models General Problem Solver (GPS) Uses IF-THEN rules, called productions Uses goals, and breaks them into sub-goals. ACT* Production Memory, Declarative Memory, Working Memory

Symbolic Models: Summary The basic units of computation are symbols Decisions, actions, and problem-solving are accomplished through productions: IF-THEN like statements in memory Usually they divide up memory into different kinds, like production memory, declarative memory, working memory. All thought is seen as manipulation of symbols in these different areas of memory. WHAT THEY ARE GOOD AT: Problem solving, reasoning, language, logic. WHAT THEY ARE BAD AT: Perception, memory storage and retrieval, low-level processing.

Where do symbols come from? These systems assume symbols are fully formed. But where do we get our symbols? We are born with this mass of cortex, and somehow we end up with symbols that map to things in the world. This is a big problem: we need a model for how we learn things.

Artificial Brains? (Diagram: inputs feed an input layer, which feeds a hidden layer, which feeds an output layer.)

Artificial Neural Nets Some Properties of Artificial Neural Nets Distributed Representation: Ideas, thoughts, concepts, memories, are all represented in the brain as patterns of activation across a large number of neurons. As a result, there is a lot of redundancy in neural representation. Graceful Degradation: Performance of the system decreases gradually as the system is damaged. Learning: When neurons are active at the same time, the strength of the connection between them tends to increase. In artificial nets, this is called the Hebb Rule.
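The Hebb Rule mentioned above can be sketched as a weight update proportional to the product of the two units' activations; the learning rate and activation values below are illustrative assumptions.

```python
# Minimal sketch of the Hebb Rule: when two units are active together,
# strengthen the connection between them.
# w_ij <- w_ij + eta * pre_i * post_j   (eta is the learning rate)

def hebb_update(w, pre, post, eta=0.1):
    """Return the weight matrix after one Hebbian update step."""
    return [[w[i][j] + eta * pre[i] * post[j]
             for j in range(len(post))]
            for i in range(len(pre))]

w = [[0.0, 0.0], [0.0, 0.0]]
pre, post = [1, 0], [1, 1]  # unit activations on this trial
w = hebb_update(w, pre, post)
print(w)  # [[0.1, 0.1], [0.0, 0.0]] -- only co-active pairs strengthened
```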

Artificial Neural Nets More Properties of Artificial Neural Nets Generalization: Because of how the network learns, and its distributed representation, it can respond to inputs that it was never officially trained on, generalizing based on similarity to things it was trained on. Distributed Processing: Not only representation, but processing is distributed, too, so there is no central controlling function, or CPU, in the brain. It is more cooperative.

Another Approach Let’s look at the biology of the eye! The first artificial neural model: the Perceptron.

The Basic Perceptron (Diagram: a two-input threshold unit with inputs a and b computing a AND b.)

The Basic Perceptron (Diagram: a two-input threshold unit with inputs a and b computing a OR b.)

The Basic Perceptron (Diagram: a two-input threshold unit with inputs a and b, attempting a XOR b.)

The Basic Perceptron (Diagram: a AND b; the four input points are linearly separable.)

The Basic Perceptron (Diagram: a OR b; the four input points are linearly separable.)

The Basic Perceptron (Diagram: a XOR b; the four input points are NOT linearly separable.)

The Basic Perceptron What the perceptron can’t do: Exclusive OR (Diagram: no single line separates the XOR outputs.)

The Basic Perceptron What the perceptron can’t do: Exclusive OR Even/Odd discrimination (total active inputs: 0 1 2 3 4 5 6 7 → output: 1 0 1 0 1 0 1 0)

The Basic Perceptron What the perceptron can’t do: Exclusive OR Even/Odd discrimination Inside/Outside discrimination

The Basic Perceptron What the perceptron can’t do: Exclusive OR Even/Odd discrimination Inside/Outside discrimination Open/Closed discrimination

The Basic Perceptron What the perceptron can’t do: Exclusive OR Even/Odd discrimination Inside/Outside discrimination Open/Closed discrimination BIG PROBLEMS
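The XOR limitation can be checked directly. In this sketch (the weights, thresholds, and search grid are illustrative assumptions, not from the slides), a single threshold unit reproduces AND and OR, while a brute-force search over weight settings finds none that reproduces XOR.

```python
# A basic perceptron: weighted sum of the inputs against a threshold.
def perceptron(a, b, wa, wb, theta):
    return 1 if wa * a + wb * b >= theta else 0

cases = [(0, 0), (0, 1), (1, 0), (1, 1)]

# AND and OR are linearly separable: one weight/threshold choice works.
print([perceptron(a, b, 1, 1, 2) for a, b in cases])  # [0, 0, 0, 1] (AND)
print([perceptron(a, b, 1, 1, 1) for a, b in cases])  # [0, 1, 1, 1] (OR)

# XOR is not: brute-force search over a grid of weights finds nothing.
xor = [0, 1, 1, 0]
grid = [x / 2 for x in range(-8, 9)]  # weights/thresholds -4.0 .. 4.0
found = any(
    [perceptron(a, b, wa, wb, th) for a, b in cases] == xor
    for wa in grid for wb in grid for th in grid
)
print(found)  # False: no single-layer setting computes XOR
```

The search only covers a finite grid, but the underlying reason is geometric: no single line separates the XOR outputs, so no grid (however fine) would succeed.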

The Multi-Layer Perceptron (Diagram: a network with a hidden layer computing a XOR b.)

The Multi-Layer Perceptron (Diagram: hidden AND and OR units over inputs a and b; the output unit has a positive connection from the OR unit and a negative connection from the AND unit, computing a XOR b.)
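The AND/OR construction on this slide can be sketched as follows; the specific weights and thresholds are one workable choice (an assumption), not the only one.

```python
# Two-layer XOR network: an OR unit and an AND unit feed an output unit
# that fires when OR is on but AND is off.

def unit(inputs, weights, theta):
    """A threshold unit: fire if the weighted sum reaches theta."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= theta else 0

def xor_net(a, b):
    h_or = unit([a, b], [1, 1], 1)        # hidden OR unit
    h_and = unit([a, b], [1, 1], 2)       # hidden AND unit
    return unit([h_or, h_and], [1, -1], 1)  # +OR, -AND: "OR but not AND"

print([xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# [0, 1, 1, 0]
```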

The Three-Layer Network (Diagram: inputs → input layer → hidden layer → output layer.)

The Two-Layer Network (Diagram: inputs → input layer → output layer.)

The Three-Layer Network (Diagram: the same network, but with question marks on the weights: how should they be set?)

The Three-Layer Network (Diagram: inputs → input layer → hidden layer → output layer.) Back-propagation is the learning procedure that allows you to adjust the weights in multi-layer networks to train them to respond correctly.
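As an illustrative sketch (not the lecture's own code), here is backpropagation training a tiny 2-2-1 sigmoid network on XOR. The architecture, learning rate, and random initialization are all assumptions chosen for a small self-contained example.

```python
import math
import random

random.seed(0)
sig = lambda x: 1 / (1 + math.exp(-x))

# Weights: input->hidden (2x2 plus 2 biases), hidden->output (2 plus 1 bias).
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b1 = [random.uniform(-1, 1) for _ in range(2)]
W2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = random.uniform(-1, 1)

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(x):
    h = [sig(W1[j][0] * x[0] + W1[j][1] * x[1] + b1[j]) for j in range(2)]
    y = sig(W2[0] * h[0] + W2[1] * h[1] + b2)
    return h, y

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

before = loss()
eta = 0.5
for _ in range(5000):
    for x, t in data:
        h, y = forward(x)
        # Backward pass: propagate the output error down to the hidden layer.
        dy = (y - t) * y * (1 - y)
        dh = [dy * W2[j] * h[j] * (1 - h[j]) for j in range(2)]
        for j in range(2):
            W2[j] -= eta * dy * h[j]
            W1[j][0] -= eta * dh[j] * x[0]
            W1[j][1] -= eta * dh[j] * x[1]
            b1[j] -= eta * dh[j]
        b2 -= eta * dy

print(round(before, 3), round(loss(), 3))  # the error shrinks as weights adjust
```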

The Hopfield Network (not all connections are shown)

The Hopfield Network Pattern Completion (example: the degraded message “THE END OF CLASS IS NEAR” is completed by the network)
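Pattern completion can be sketched with a minimal Hopfield network: store one ±1 pattern with a Hebbian rule, then let a corrupted cue settle back to it. The pattern and network size here are illustrative assumptions.

```python
# Minimal Hopfield-network sketch: Hebbian storage of +/-1 patterns,
# then threshold updates until the state settles.

def train(patterns, n):
    """Hebbian weights: W[i][j] accumulates p_i * p_j (no self-connections)."""
    W = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]
    return W

def recall(W, state, steps=5):
    """Update units one at a time; the state settles into a stored pattern."""
    s = list(state)
    for _ in range(steps):
        for i in range(len(s)):
            total = sum(W[i][j] * s[j] for j in range(len(s)))
            s[i] = 1 if total >= 0 else -1
    return s

stored = [1, 1, -1, -1, 1, -1]
W = train([stored], 6)
cue = [1, -1, -1, -1, 1, -1]  # the stored pattern with one unit flipped
print(recall(W, cue))          # [1, 1, -1, -1, 1, -1]: completed
```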

Quick Summary Symbolic Models: Basic Units are Symbols Local Representation: Individual elements have meaning Use productions, goals (means-ends analysis), working memory Two examples are GPS and ACT* Are good at problem solving, reasoning, logic, and language Neural Network Models: Basic Units are artificial neurons Distributed Representation: Patterns across elements have meaning Use activation, learning (Hebb Rule, Backpropagation) Two examples are layered networks and Hopfield networks Are good at pattern matching, memory storage/retrieval