PSY105 Neural Networks 5/5 5. “Function – Computation – Mechanism”

Lecture 1 recap
We can describe patterns at one level of description that emerge due to rules followed at a lower level of description. Neural network modellers hope that we can understand behaviour by creating models of networks of artificial neurons.

Lecture 2 recap
Simple model neurons:
– transmit a signal of 0 or 1 (or any value in between)
– receive information from other neurons
– weight this information
Such neurons can be used to perform any computation.
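
As a concrete illustration of such a unit, here is a minimal sketch in Python. The function name, the logistic squashing function and the example weights are assumptions for illustration, not the specific unit used in the course.

```python
import math

def model_neuron(inputs, weights, bias=0.0):
    """A minimal model neuron: weight each input, sum, and squash to (0, 1).

    `inputs` and `weights` are equal-length lists of floats. The logistic
    (sigmoid) function is one common way of keeping the output between
    0 and 1; the lecture does not commit to a particular squashing function.
    """
    net = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-net))   # output lies strictly between 0 and 1

# Example: two inputs, one strongly weighted, one ignored.
print(model_neuron([1.0, 0.2], [2.5, 0.0]))   # ~0.92
```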

Lecture 3 recap
Classical conditioning is a simple form of learning which can be understood as an increase in the weight (‘associative strength’) between two stimuli, one of which is associated with an ‘unconditioned response’.

Lecture 4 recap
A Hebb rule for the weight change between two neurons is:
– Δweight = activity1 × activity2 × learning-rate constant
In order to use this rule to associate two stimuli which are separated in time, we need the neuron activity associated with each stimulus to persist in time.
– This can be implemented as an ‘eligibility trace’.
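
The rule and the trace can be sketched in a few lines of Python. The decay constant, the `max`-based trace update and the variable names are illustrative assumptions; the course model may implement the trace differently.

```python
def hebb_update(weight, pre_activity, post_activity, learning_rate=0.1):
    """Hebb rule from the recap: delta_w = activity1 * activity2 * learning rate."""
    return weight + learning_rate * pre_activity * post_activity

def update_trace(trace, stimulus, decay=0.8):
    """Eligibility trace: the stimulus tops the trace up, then it decays each step."""
    return max(stimulus, trace * decay)

# CS at t=0, UCS at t=3: without the trace the two activities never overlap.
weight, trace = 0.0, 0.0
for t in range(6):
    cs = 1.0 if t == 0 else 0.0
    ucs = 1.0 if t == 3 else 0.0
    trace = update_trace(trace, cs)
    weight = hebb_update(weight, trace, ucs)   # the trace bridges the gap in time
print(weight)   # ~0.05: some learning occurred despite the CS-UCS gap
```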

I present you with a robot which uses a simple neural network to acquire classically conditioned responses. It can, for example, learn to associate a warning stimulus with an upcoming wall and hence turn around before it reaches the wall. Describe an experiment you would do to test the details of how the robot learns. Say what you would do, what aspect(s) of the robot's learning the results would tell you about, and why.

The problem of continuous time
[Figure: Stimulus 1 and Stimulus 2 shown as events on a continuous time axis]

Traces
[Figure: Stimulus 1 and Stimulus 2 with the decaying trace each leaves behind]
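
To make the missing figure concrete, here is a short sketch of the trace left by a brief stimulus; the decay factor is an assumed value.

```python
# A one-step pulse of Stimulus 1 and the trace it leaves behind.
decay = 0.7                         # assumed decay factor per time step
stimulus = [0, 1, 0, 0, 0, 0, 0]
trace = 0.0
for t, s in enumerate(stimulus):
    trace = max(s, trace * decay)
    print(t, s, round(trace, 3))
# The pulse is over after one step, but the trace persists: 1.0, 0.7, 0.49, ...
```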

Consequences of this implementation
The learned association will depend on:
– intensity of the CS
– duration of the CS
– intensity of the UCS
– duration of the UCS
– separation in time of the CS and UCS
– the order in which the CS and UCS occur (cf. the Rescorla–Wagner discrete-time model)

We have designed an information processing system that learns associations
[Figure: association strength plotted against trials]
Sutton, R. S., & Barto, A. G. (1990). Time-derivative models of Pavlovian reinforcement. In M. Gabriel & J. Moore (Eds.), Learning and Computational Neuroscience: Foundations of Adaptive Networks. MIT Press.

Without continued pairing
[Figure: association strength against trials, with pairing stopped partway through]

Without continued pairing -> extinction
[Figure: association strength against trials, declining after pairing stops]
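
A sketch of the acquisition and extinction curves the two plots above showed. It uses a Rescorla–Wagner-style error-correcting update as a stand-in for the trace model, so the exact rule and constants are assumptions for illustration only.

```python
# Acquisition followed by extinction with an error-correcting update
# (an assumed stand-in for the trace model; constants are illustrative).
learning_rate = 0.2
association = 0.0
history = []
for trial in range(40):
    ucs_present = 1.0 if trial < 20 else 0.0   # pairing stops at trial 20
    prediction_error = ucs_present - association
    association += learning_rate * prediction_error
    history.append(round(association, 3))
print(history[:5])    # rises toward 1.0 while CS and UCS are paired
print(history[-5:])   # decays back toward 0.0 after pairing stops: extinction
```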

Analysis of information processing systems
– Function (‘computational level’)
– Computation (‘algorithmic level’)
– Mechanism (‘implementational level’)
Marr, David (1982). Vision. San Francisco: W. H. Freeman.

Marrian analysis of classical conditioning
Function: learn to predict events based on past experience.
Computation: stimuli evoke ‘eligibility traces’; a Hebb rule governs changes in weights [+ other additional assumptions, which are always needed when you try to make a computational recipe].
Mechanism: at least one response neuron, one unconditioned stimulus neuron, and one neuron for each conditioned stimulus.
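
A minimal sketch of the mechanism level described here: one response unit driven by a UCS input and by CS inputs whose weights change according to the trace-plus-Hebb recipe above. Every name, constant and simplification (a single response unit, a fixed unconditioned pathway) is an illustrative assumption, not the course implementation.

```python
class ConditioningNetwork:
    """One response unit, one UCS input (fixed pathway), one weight per CS."""

    def __init__(self, n_cs, learning_rate=0.1, trace_decay=0.8):
        self.cs_weights = [0.0] * n_cs
        self.traces = [0.0] * n_cs
        self.learning_rate = learning_rate
        self.trace_decay = trace_decay

    def step(self, cs_inputs, ucs_input):
        # Update each CS's eligibility trace.
        self.traces = [max(cs, tr * self.trace_decay)
                       for cs, tr in zip(cs_inputs, self.traces)]
        # Response = unconditioned pathway + learned CS contributions.
        response = ucs_input + sum(w * cs for w, cs in zip(self.cs_weights, cs_inputs))
        # Hebbian weight change between each CS trace and the response.
        self.cs_weights = [w + self.learning_rate * tr * response
                           for w, tr in zip(self.cs_weights, self.traces)]
        return response

net = ConditioningNetwork(n_cs=2)
for _ in range(10):                 # pair CS1 with the UCS one step later
    net.step([1.0, 0.0], 0.0)
    net.step([0.0, 0.0], 1.0)
print(net.cs_weights)               # CS1's weight grows; CS2's stays at 0.0
```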

Kim, J. J., & Thompson, R. F. (1997). Cerebellar circuits and synaptic mechanisms involved in classical eyeblink conditioning. Trends in Neurosciences, 20(4).

Marrian analysis: a simple example
– Function
– Computation
– Mechanism

[Diagram: Theory, Experiments, Synthesis, Analysis]

Our classical conditioning networks
[Network diagram: stimulus units CS1, UCS and CS2, response units, and an S-S link]

Internal representation of the conditioned stimulus

The lab: using Marrian analysis to make predictions

Function
What is the purpose of learning for an animal?
– Does our model behave in a sensible (‘adaptive’) way when it follows our rule?
– Is the rule sufficient to explain animal learning?
Test: think of a way you would want the model/robot to behave, then test if it does.

Computation
– intensity of the CS
– duration of the CS
– intensity of the UCS
– duration of the UCS
– separation in time of the CS and UCS
– the order in which the CS and UCS occur (cf. the Rescorla–Wagner discrete-time model)
– the learning rate
– the rate of decay of the trace
– the frequency of pairing
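
One way to turn these computational parameters into testable predictions is to sweep one of them and record the association the model ends up with. The sketch below sweeps the CS–UCS separation using the assumed trace-plus-Hebb recipe from earlier; the exact numbers are illustrative, but the qualitative prediction (longer gaps give weaker learning) is the kind of thing you would test against the robot.

```python
def learned_weight(cs_ucs_gap, pairings=20, learning_rate=0.1, trace_decay=0.8):
    """Final CS weight after repeated pairings with a given CS-UCS gap (in steps)."""
    weight = 0.0
    for _ in range(pairings):
        trace = 0.0
        for t in range(cs_ucs_gap + 1):
            cs = 1.0 if t == 0 else 0.0
            ucs = 1.0 if t == cs_ucs_gap else 0.0
            trace = max(cs, trace * trace_decay)
            weight += learning_rate * trace * ucs
    return weight

# Prediction of this sketch: the longer the CS-UCS gap, the weaker the association.
for gap in range(1, 6):
    print(gap, round(learned_weight(gap), 3))
```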

Mechanism
[Network diagram: stimulus units CS1, UCS and CS2, response units, and an S-S link]

What is your prediction? What will you do to the rule or the environment? How will you know if it has been confirmed or falsified?
