580.691 Learning Theory Reza Shadmehr State estimation theory.

Subject was instructed to pull on a knob that was fixed on a rigid wall. A) EMG recordings from arm and leg muscles. Before biceps is activated, the brain activates the leg muscles to stabilize the lower body and prevent sway due to the anticipated pulling force on the upper body. B) When a rigid bar is placed on the upper body, the leg muscles are not activated when biceps is activated. (Cordo and Nashner, 1982)

Effect of eye movement on the memory of a visual stimulus. In the top panel, the filled circle represents the fixation point, the asterisk indicates the location of the visual stimulus, and the dashed circle indicates the receptive field of a cell in the LIP region of the parietal cortex. A) Discharge to the onset and offset of a visual stimulus in the cell’s receptive field. Abbreviations: H. eye, horizontal eye position; Stim, stimulus; V. eye, vertical eye position. B) Discharge during the time period in which a saccade brings the stimulus into the cell’s receptive field. The cell’s discharge increased before the saccade brought the stimulus into the cell’s receptive field. (From Duhamel et al., 1992; scale bar, 100 Hz)

Why predict sensory consequences of motor commands?

The subject looked at a moving cursor while a group of dots appeared on the screen for 300 ms. In some trials the dots remained still (A), while in other trials they moved together to the left or right with a constant speed (B). The subject indicated the direction of motion of the dots. From this result, the authors estimated the speed of subjective stationarity, i.e., the speed of dots for which the subject perceived them to be stationary. C) The unfilled circles represent performance of control subjects: regardless of the speed of the cursor, they perceived the dots to be stationary only if their speed was near zero. The filled triangles represent performance of subject RW: as the speed of the cursor increased, RW perceived the dots to be stationary if their speed was near the speed of the cursor. (Haarmeier et al., 1997)

Disorders of agency in schizophrenia relate to an inability to compensate for sensory consequences of self-generated motor commands. In a paradigm similar to that shown in the last figure, volunteers estimated whether, during motion of a cursor, the background moved to the right or left. By varying the background speed, at each cursor speed the experimenters estimated the speed of perceptual stationarity, i.e., the speed of background motion for which the subject saw the background to be stationary. They then computed a compensation index as the difference between the speed of eye movement and the speed of the background when perceived to be stationary, divided by the speed of eye movement. The subset of schizophrenic patients who had delusional symptoms showed a greater deficit than controls in their ability to compensate for the sensory consequences of self-generated motor commands. (Lindner et al., 2005)

Combining predictions with observations

Parameter variance depends only on input selection and noise. A noisy process produces n data points and we form an ML estimate of w. We run the noisy process again with the same sequence of x’s and re-estimate w. The distribution of the resulting w will have a var-cov that depends only on the sequence of inputs, the bases that encode those inputs, and the noise sigma.
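A minimal sketch of the quantities on this slide, assuming the standard linear-Gaussian setup y = Xw + ε with ε ~ N(0, σ²I), where the rows of X are the basis-encoded inputs:

$$\hat{w}_{ML} = (X^T X)^{-1} X^T y, \qquad \operatorname{var}(\hat{w}_{ML}) = \sigma^2 (X^T X)^{-1}$$

so the var-cov of the estimate depends only on the inputs (through X) and on the noise variance σ².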

The Gaussian distribution and its var-cov matrix. A 1-D Gaussian distribution is defined by the density given below; in n dimensions, it generalizes to the multivariate form. When x is a vector, the variance is expressed in terms of a covariance matrix C, where ρ_ij corresponds to the degree of correlation between variables x_i and x_j.
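The standard densities referred to above (the slide's own equations were not preserved in this transcript):

$$p(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right), \qquad p(\mathbf{x}) = \frac{1}{(2\pi)^{n/2}\,|C|^{1/2}} \exp\!\left(-\tfrac{1}{2}(\mathbf{x}-\boldsymbol{\mu})^T C^{-1}(\mathbf{x}-\boldsymbol{\mu})\right)$$

with $C_{ij} = \rho_{ij}\,\sigma_i\sigma_j$ and $\rho_{ii} = 1$.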

Panels: x1 and x2 are positively correlated; x1 and x2 are not correlated; x1 and x2 are negatively correlated.

Parameter uncertainty: Example 1 Input history: x1 was “on” most of the time. I’m pretty certain about w1. However, x2 was “on” only once, so I’m uncertain about w2.

Parameter uncertainty: Example 2. Input history: x1 and x2 were “on” mostly together. The weight var-cov matrix shows what I learned: I do not know the individual values of w1 and w2 with much certainty (only their sum is well constrained, so the off-diagonal terms are strongly negative). x1 appeared slightly more often than x2, so I’m a little more certain about the value of w1.

Parameter uncertainty: Example 3. Input history: x2 was mostly “on”. I’m pretty certain about w2, but I am very uncertain about w1. Occasionally x1 and x2 were on together, so I have some reason to believe that their weights are correlated (a nonzero off-diagonal term in the var-cov matrix).

Effect of uncertainty on learning rate. When you observe an error in trial n, the amount that you should change w should depend on how certain you are about w. The more certain you are, the less you should be influenced by the error. The less certain you are, the more you should “pay attention” to the error; the Kalman gain multiplies the error to set the size of the update. Rudolph E. Kalman (1960) A new approach to linear filtering and prediction problems. Transactions of the ASME–Journal of Basic Engineering, 82 (Series D). (Kalman was then at the Research Institute for Advanced Study, 7212 Bellona Ave, Baltimore, MD.)

Example of the Kalman gain: running estimate of an average. Here w(n) is the online estimate of the mean of y. The Kalman gain acts as a learning rate that decreases as the number of samples increases: as n increases, we trust our past estimate w(n-1) a lot more than the new observation y(n).
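One way to write this running-average example (a sketch; the slide's equations were images):

$$\hat{w}^{(n)} = \hat{w}^{(n-1)} + \frac{1}{n}\left(y^{(n)} - \hat{w}^{(n-1)}\right) = \frac{n-1}{n}\,\hat{w}^{(n-1)} + \frac{1}{n}\,y^{(n)}$$

Here the gain is $k^{(n)} = 1/n$: the first observation is taken at face value, and later observations move the estimate less and less.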

Example of the Kalman gain: running estimate of variance. Here sigma_hat is the online estimate of the variance of y.
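A common form of the corresponding variance update (an assumption about what the slide showed; other equivalent forms exist):

$$\hat{\sigma}^{2\,(n)} = \hat{\sigma}^{2\,(n-1)} + \frac{1}{n}\left[\left(y^{(n)} - \hat{w}^{(n)}\right)^2 - \hat{\sigma}^{2\,(n-1)}\right]$$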

Some observations about the variance of model parameters. We note that P is simply the var-cov matrix of our model weights; it represents the uncertainty in our estimates of the model parameters. We want to update the weights in such a way as to minimize the trace of this variance. The trace is the expected sum of the squared errors between our estimates of w and their true values.

The trace of the parameter var-cov matrix is the sum of squared parameter errors. Our objective is to find a learning rate k (the Kalman gain) such that we minimize the sum of the squared errors in our parameter estimates. This sum is the trace of the P matrix. Therefore, given observation y(n), we want to find k such that we minimize the variance of our estimate of w.

Objective: adjust the learning gain in order to minimize model uncertainty. The quantities involved are: the a priori variance of the parameters; the a posteriori variance of the parameters; my estimate of w* before I see y in trial n, given that I have seen y up to trial n-1; the error in trial n; my estimate after I see y in trial n; the hypothesis about how the data are generated; and the observation in trial n.

Evolution of parameter uncertainty

Find K to minimize trace of uncertainty

Find K to minimize trace of uncertainty (continued; the denominator term in the gain is a scalar)

The Kalman gain If I have a lot of uncertainty about my model, P is large compared to sigma. I will learn a lot from the current error. If I am pretty certain about my model, P is small compared to sigma. I will tend to ignore the current error.
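For a scalar observation $y^{(n)} = x^{(n)T} w + \epsilon$ with $\epsilon \sim N(0,\sigma^2)$, the gain described here takes the standard form:

$$k^{(n)} = \frac{P^{(n|n-1)}\, x^{(n)}}{x^{(n)T} P^{(n|n-1)} x^{(n)} + \sigma^2}$$

Large P relative to σ² gives a gain near its maximum (learn a lot from the error); small P relative to σ² gives a gain near zero (largely ignore the error).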

Update of model uncertainty Model uncertainty decreases with every data point that you observe.
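The corresponding uncertainty update (standard form):

$$P^{(n|n)} = \left(I - k^{(n)} x^{(n)T}\right) P^{(n|n-1)}$$

Each observation can only shrink (or leave unchanged) the uncertainty along the direction of the observed input.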

In this model, we hypothesize that the hidden variables, i.e., the “true” weights, do not change from trial to trial. The generative model distinguishes the observed variables from the hidden variable. The filter then consists of: an a priori estimate of the mean and variance of the hidden variable before I observe the first data point; an update of the estimate of the hidden variable after I observe the data point; and a forward projection of the estimate to the next trial.

In this model, we hypothesize that the hidden variables change from trial to trial. The steps are the same: an a priori estimate of the mean and variance of the hidden variable before I observe the first data point; an update of the estimate of the hidden variable after I observe the data point; and a forward projection of the estimate to the next trial (see the recursion below).
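Putting the pieces together for this time-varying model, the trial-by-trial recursion is (standard Kalman filter form, with A the state transition matrix and Q the state noise covariance):

$$\hat{w}^{(n|n-1)} = A\,\hat{w}^{(n-1|n-1)}, \qquad P^{(n|n-1)} = A\,P^{(n-1|n-1)} A^T + Q$$
$$k^{(n)} = \frac{P^{(n|n-1)}\, x^{(n)}}{x^{(n)T} P^{(n|n-1)} x^{(n)} + \sigma^2}$$
$$\hat{w}^{(n|n)} = \hat{w}^{(n|n-1)} + k^{(n)}\left(y^{(n)} - x^{(n)T}\hat{w}^{(n|n-1)}\right), \qquad P^{(n|n)} = \left(I - k^{(n)} x^{(n)T}\right) P^{(n|n-1)}$$

The model of the previous slide is the special case A = I, Q = 0.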

The learning rate is proportional to the ratio between two uncertainties: the uncertainty about my model parameters vs. the uncertainty about my measurement. After we observe an input x, the uncertainty associated with the weight of that input decreases. Because of the state update noise Q, uncertainty increases again as we form the prior for the next trial.

2. Bayesian state estimation 3. Causal Inference 4. The influence of priors 5. Behaviors that are not Bayesian

Comparison of the Kalman gain to LMS (see the derivation in the homework). In the Kalman gain approach, the P matrix depends on the history of all previous and current inputs. In LMS, the learning rate is simply a constant that does not depend on past history. With the Kalman gain, our estimate converges in a single pass over the data set. In LMS, we do not estimate the var-cov matrix P on each trial, but we need multiple passes before our estimate converges.
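Side by side, the two update rules being compared are (with η a fixed LMS learning rate):

$$\text{Kalman:}\quad \Delta\hat{w} = k^{(n)}\left(y^{(n)} - x^{(n)T}\hat{w}\right), \qquad \text{LMS:}\quad \Delta\hat{w} = \eta\, x^{(n)}\left(y^{(n)} - x^{(n)T}\hat{w}\right)$$

The Kalman gain $k^{(n)}$ is recomputed from P on every trial, whereas η is constant.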

Effect of state and measurement noise on the Kalman gain. High noise in the state update model produces increased uncertainty in the model parameters; this produces high learning rates. High noise in the measurement also increases parameter uncertainty, but this increase is small relative to the measurement uncertainty. Higher measurement noise therefore leads to lower learning rates.

Effect of state transition auto-correlation on the Kalman gain. The learning rate is higher in a state model that has high auto-correlations (larger a). That is, if the learner assumes that the world is changing slowly (a is close to 1), then the learner will have a large learning rate.

Kalman filter as a model of animal learning. Suppose that x represents inputs from the environment: a light and a tone. Suppose that y represents rewards, like a food pellet. The filter then specifies the animal’s model of the experimental setup, the animal’s expectation on trial n, and the animal’s learning from trial n.
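In symbols, a minimal version of this conditioning model (my notation, following the filter above): the animal's model of the setup is $y^{(n)} = x^{(n)T} w^{(n)} + \eta^{(n)}$ with slowly drifting weights $w^{(n+1)} = w^{(n)} + \epsilon^{(n)}$; its expectation on trial n is $\hat{y}^{(n)} = x^{(n)T}\hat{w}^{(n|n-1)}$; and its learning from trial n is $\hat{w}^{(n|n)} = \hat{w}^{(n|n-1)} + k^{(n)}\left(y^{(n)} - \hat{y}^{(n)}\right)$.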

Various forms of classical conditioning in animal psychology (table from Peter Dayan). Several of these phenomena are not explained by LMS, but are predicted by the Kalman filter.

Sharing Paradigm. Train: {x1,x2} -> 1. Test: x1 -> ?, x2 -> ? Result: x1 -> 0.5, x2 -> 0.5. (The simulation panels show y, yhat, the weights, the uncertainties, and the gains, comparing learning with the Kalman gain against LMS.)

Blocking. Kamin (1968) Attention-like processes in classical conditioning. In: Miami symposium on the prediction of behavior: aversive stimulation (ed. MR Jones), pp, Univ. of Miami Press. Kamin trained an animal to continuously press a lever to receive food. He then paired a light (conditioned stimulus) with a mild electric shock to the foot of the rat (unconditioned stimulus). In response to the shock, the animal would reduce its lever-press activity. Soon the animal learned that the light predicted the shock, and therefore reduced lever pressing in response to the light. He then paired the light with a tone when giving the electric shock. After this second stage of training, he observed that when the tone was given alone, the animal did not reduce its lever pressing. The animal had not learned anything about the tone.

Blocking Paradigm. Train: x1 -> 1, then {x1,x2} -> 1. Test: x2 -> ?, x1 -> ? Result: x2 -> 0, x1 -> 1. (Simulation panels compare learning with the Kalman gain against LMS.)

Backwards Blocking Paradigm. Train: {x1,x2} -> 1, then x1 -> 1. Test: x2 -> ? Result: the response to x2 is reduced; this backwards blocking is produced by the Kalman filter but not by LMS. (Simulation panels compare learning with the Kalman gain against LMS; see the sketch below.)
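A small simulation sketch of the three paradigms above, comparing a Kalman learner with LMS. The function names, noise values, learning rate, and trial counts here are illustrative assumptions, not values taken from the slides:

```python
import numpy as np

def kalman_step(w, P, x, y, sigma2):
    """One trial of a Kalman learner with static hidden weights (A = I, Q = 0)."""
    k = P @ x / (x @ P @ x + sigma2)   # Kalman gain
    w = w + k * (y - x @ w)            # weight update driven by the prediction error
    P = P - np.outer(k, x) @ P         # uncertainty update
    return w, P

def lms_step(w, x, y, eta):
    """One trial of LMS with a fixed learning rate."""
    return w + eta * x * (y - x @ w)

sigma2, eta = 0.05, 0.2                # assumed measurement noise and LMS rate
paradigms = {
    "sharing":            [([1, 1], 1)] * 20,
    "blocking":           [([1, 0], 1)] * 20 + [([1, 1], 1)] * 20,
    "backwards blocking": [([1, 1], 1)] * 20 + [([1, 0], 1)] * 20,
}
for name, trials in paradigms.items():
    w_kal, P = np.zeros(2), np.eye(2)
    w_lms = np.zeros(2)
    for x, y in trials:
        x = np.asarray(x, dtype=float)
        w_kal, P = kalman_step(w_kal, P, x, y, sigma2)
        w_lms = lms_step(w_lms, x, y, eta)
    print(f"{name:18s}  Kalman w = {np.round(w_kal, 2)}  LMS w = {np.round(w_lms, 2)}")
```

With settings like these, both learners split credit in the sharing paradigm and both show blocking, but only the Kalman learner reduces w2 in backwards blocking, because P carries the negative correlation between w1 and w2 acquired during compound training.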

Different output models. Suppose that x represents inputs from the environment (a light and a tone) and y represents a reward, like a food pellet. Case 1: the animal assumes an additive model; if each stimulus predicts one reward, then when the two are present together they predict two rewards. Case 2: the animal assumes a weighted-average model; if each stimulus predicts one reward, then when the two are present together they still predict one reward, but with higher confidence. The weights b1 and b2 should be set to the inverse of the variance (uncertainty) with which each stimulus x1 and x2 predicts the reward.
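One plausible way to write the two output models (the exact form on the slide was not preserved; the b_i are the inverse variances mentioned above):

$$\text{Case 1 (additive):}\quad \hat{y} = w_1 x_1 + w_2 x_2 \qquad\qquad \text{Case 2 (weighted average):}\quad \hat{y} = \frac{b_1 w_1 x_1 + b_2 w_2 x_2}{b_1 + b_2}, \quad b_i = \frac{1}{\sigma_i^2}$$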

General case of the Kalman filter (the hidden state and the observation are now vectors, n×1 and m×1): an a priori estimate of the mean and variance of the hidden variable before I observe the first data point; an update of the estimate of the hidden variable after I observe the data point; and a forward projection of the estimate to the next trial.

How to set the initial var-cov matrix. In the homework, we will show a general update for the inverse of P. If we have absolutely no prior information on w, then before we see the first data point P(1|0) is infinite, and therefore its inverse is zero. After we see the first data point, this update gives our new estimate of P. A reasonable and conservative choice for the initial value of P is to set it to this value.
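A plausible reconstruction of the missing equations (hedged; this is the standard information-form update for a scalar observation with noise variance σ²):

$$\left(P^{(n|n)}\right)^{-1} = \left(P^{(n|n-1)}\right)^{-1} + \frac{1}{\sigma^2}\, x^{(n)} x^{(n)T}$$

With no prior information, $\left(P^{(1|0)}\right)^{-1} = 0$, so after the first observation $\left(P^{(1|1)}\right)^{-1} = \frac{1}{\sigma^2}\, x^{(1)} x^{(1)T}$. The conservative initialization suggested on the slide is then to choose $P^{(1|0)}$ so that its inverse is of this same magnitude, i.e., roughly the uncertainty you would have after seeing a single typical data point.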

Data fusion. Suppose that we have two sensors that independently measure something, and we would like to combine their measures to form a better estimate. What should the weights be? Sensor 1 gives us measurement y1 with Gaussian noise of variance σ1², and sensor 2 gives us measurement y2 with Gaussian noise of variance σ2². A good idea is to weight each sensor inversely proportional to its noise variance, as shown below.
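The combination rule and its consequence for the posterior variance (standard inverse-variance weighting):

$$\hat{y} = \frac{1/\sigma_1^2}{1/\sigma_1^2 + 1/\sigma_2^2}\, y_1 + \frac{1/\sigma_2^2}{1/\sigma_1^2 + 1/\sigma_2^2}\, y_2, \qquad \frac{1}{\sigma_{\hat{y}}^2} = \frac{1}{\sigma_1^2} + \frac{1}{\sigma_2^2}$$

so the combined estimate is always less variable than either sensor alone, which is the point made on a later slide.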

Data fusion via the Kalman filter. To see why this makes sense, let us put forth a generative model that describes our hypothesis about how the data we are observing are generated: the two measurements are the observed variables, and the quantity being measured is the hidden variable.

See the homework for the derivation: starting from the priors, our first observation yields a posterior estimate and the variance of that posterior estimate.

Notice that after we make our first observation, the variance of our posterior is smaller than the variance of either sensor. (Figure: what our sensors tell us vs. the real world.)

(Figure: probability distributions for sensor 1, sensor 2, and the combined estimate, showing the mean of the posterior and its variance, when combining equally noisy sensors and when combining sensors with unequal noise.)

Puzzling results: savings and memory despite “washout”. Gain = eye displacement divided by target displacement. Result 1: After changes in gain, monkeys exhibit recall despite behavioral evidence for washout. Kojima et al. (2004) Memory of learning facilitates saccade adaptation in the monkey. J Neurosci 24:7531.

Puzzling results: improvements in performance without error feedback. Result 2: Following changes in gain and a long period of washout, monkeys exhibit no recall. Result 3: Following changes in gain and a period of darkness, monkeys exhibit a “jump” in memory. Kojima et al. (2004) J Neurosci 24:7531.

The learner’s hypothesis about the structure of the world: 1. The world has many hidden states; what I observe is a linear combination of these states. 2. The hidden states change from trial to trial; some change slowly, others change fast. 3. The states that change fast have larger noise than states that change slowly. (The model has a slow system and a fast system, each with its own state transition equation and a shared output equation; see the sketch below.)
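A minimal two-state version of these assumptions, in the notation of the earlier slides (the specific structure of A and Q is my assumption about the slide's equations):

$$w^{(n+1)} = A\, w^{(n)} + \epsilon^{(n)}, \quad A = \begin{bmatrix} a_{slow} & 0 \\ 0 & a_{fast} \end{bmatrix}, \quad \epsilon^{(n)} \sim N(0, Q), \quad Q = \begin{bmatrix} q_{slow} & 0 \\ 0 & q_{fast} \end{bmatrix}$$
$$y^{(n)} = x^{(n)T} w^{(n)} + \eta^{(n)}, \qquad a_{slow} \approx 1 > a_{fast}, \qquad q_{fast} > q_{slow}$$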

Simulations for savings. The critical assumption is that in the fast system there is much more noise than in the slow system. This produces a larger learning rate in the fast system.

Simulations for spontaneous recovery despite zero error feedback (error clamp). In the error-clamp period, estimates are made yet the weight update equation does not see any error. Therefore, the effect of the Kalman gain in the error-clamp period is zero. Nevertheless, the weights continue to change because of the state update equations: the fast weights rapidly rebound to zero, while the slow weights slowly decline. The sum of these two changes produces a “spontaneous recovery” after washout (see the simulation sketch below).
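A runnable sketch of this error-clamp simulation. The values chosen for A, Q, the measurement noise, and the trial schedule are illustrative assumptions, not the values used in the lecture:

```python
import numpy as np

# Two-state (slow/fast) Kalman learner; all constants below are assumed, illustrative values.
A = np.diag([0.999, 0.60])      # retention: the slow state decays little, the fast state decays quickly
Q = np.diag([1e-5, 1e-2])       # state noise: the fast state is assumed to be much noisier
sigma2 = 0.02                   # measurement noise variance
x = np.array([1.0, 1.0])        # the observed output is the sum of the two hidden states

w_hat, P = np.zeros(2), np.eye(2) * 0.01

def trial(w_hat, P, err):
    """One trial: Kalman update with the given error, then forward projection via A and Q."""
    k = P @ x / (x @ P @ x + sigma2)
    w_hat = w_hat + k * err
    P = P - np.outer(k, x) @ P
    return A @ w_hat, A @ P @ A.T + Q

# 1) adaptation to a +1 perturbation
for _ in range(100):
    w_hat, P = trial(w_hat, P, 1.0 - x @ w_hat)

# 2) counter-perturbation (-1) until the output is back near baseline ("washout")
while x @ w_hat > 0:
    w_hat, P = trial(w_hat, P, -1.0 - x @ w_hat)

# 3) error clamp: the learner sees zero error, but the states keep evolving through A
clamp_output = []
for _ in range(100):
    w_hat, P = trial(w_hat, P, 0.0)
    clamp_output.append(x @ w_hat)

# The fast (negative) state rebounds to zero faster than the slow (positive) state decays,
# so the summed output transiently rises above baseline: spontaneous recovery.
print(round(clamp_output[0], 3), round(max(clamp_output), 3), round(clamp_output[-1], 3))
```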

Changes in representation without error feedback. Saccade gain was adapted and then allowed to recover either with the target extinguished during recovery or with the target visible during recovery. In one condition the mean gain was 0.86 at the start of recovery and 0.87 at the end (a 1.2% change); in the other it was 0.83 at the start and 0.95 at the end (a 14.4% change). Seeberger et al. (2002) Brain Research 956:

Massed vs. spaced training: effect of changing the inter-trial interval. (Figures: learning reaching in a force field; discrimination performance for ITI = 8 min vs. ITI = 1 min.) Rats were trained on an operant conditional discrimination in which an ambiguous stimulus (X) indicated both the occasions on which responding in the presence of a second cue (A) would be reinforced and the occasions on which responding in the presence of a third cue (B) would not be reinforced (X --> A+, A-, X --> B-, B+). Both rats with lesions of the hippocampus and control rats learned this discrimination more rapidly when the training trials were widely spaced (intertrial interval of 8 min) than when they were massed (intertrial interval of 1 min). With spaced practice, lesioned and control rats learned this discrimination equally well. But when the training trials were massed, lesioned rats learned more rapidly than controls. Han, J.S., Gallagher, M. & Holland, P. Hippocampus 8: (1998)

Massed vs. spaced training: effect of changing the inter-trial interval. Performance in a water maze, comparing 4 trials a day for 4 days with 16 trials in one day. Sisti, Glass, Shors (2007) Neurogenesis and the spacing effect: learning over time enhances memory and the survival of new neurons. Learning and Memory 14:

The learner’s hypothesis about the structure of the world: 1. The world has many hidden states; what I observe is a linear combination of these states. 2. The hidden states change from trial to trial; some change slowly, others change fast. 3. The states that change fast have larger noise than states that change slowly. 4. The state changes can occur more frequently than I can make observations (i.e., during the inter-trial interval).

(Figure: observations y and model predictions yhat for ITI = 2 and ITI = 20.)

When there is an observation, the uncertainty for each hidden variable decreases in proportion to its Kalman gain. When there are no observations, the uncertainty decreases in proportion to A squared, but increases in proportion to the state noise Q. (Figure: uncertainty of the slow state and of the fast state, ITI = 20.) Beyond a minimum ITI, increasing the ITI continues to increase the uncertainty of the slow state but has little effect on the fast-state uncertainty. The longer ITI increases the total learning by increasing the slow state’s sensitivity to error.
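Between observations, the uncertainty of each state evolves only through the state equation; for the diagonal two-state model above this is, per intervening step of the inter-trial interval:

$$P_{slow} \leftarrow a_{slow}^2\, P_{slow} + q_{slow}, \qquad P_{fast} \leftarrow a_{fast}^2\, P_{fast} + q_{fast}$$

Because $a_{slow} \approx 1$, the slow-state uncertainty keeps growing roughly linearly with the ITI, while the fast-state uncertainty saturates near $q_{fast}/(1 - a_{fast}^2)$, consistent with the observation above.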

(Figure: observations y and predictions yhat, weights w1 and w2, and Kalman gains k1 and k2 for massed vs. spaced training, plotted against observation number.) Performance in spaced training depends largely on the slow state. Therefore, spaced training produces memories that decay little with the passage of time.

Spaced training results in better retention in learning a second language. On Day 1, subjects learned to translate written Japanese words into English. They were given a Japanese word (written phonetically) and then given the English translation; this “study trial” was repeated twice. Afterwards, they were given the Japanese word and had to write the translation. If their translation was incorrect, the correct translation was given. The ITI between word repetitions was either 2, 14, or 98 trials. Performance during training was better when the ITI was short. However, retention was much better for words that were observed with a longer ITI. (The retention test involved two groups, one tested at 1 day and the other at 7 days; performance was slightly better for the 1-day group, but the results were averaged in the figure.) (Figure: performance during training and at the retention test for ITI = 2, 14, and 98.) Pavlik, P. I. and Anderson, J. R. (2005). Practice and forgetting effects on vocabulary memory: An activation-based model of the spacing effect. Cognitive Science, 29.

Application of the Kalman filter to problems in sensorimotor control. (Diagram: motor command, state of our body, sensory measurement.) DM Wolpert et al. (1995) Science 269:1880.

When we move our arm in darkness, we may estimate the position of our hand based on three sources of information: (1) proprioceptive feedback; (2) a forward model of how the motor commands have moved our arm; or (3) a combination of the prediction from the forward model with the actual proprioceptive feedback. Experimental procedures: the subject holds a robotic arm in total darkness. The hand is briefly illuminated. An arrow is displayed to the left or right, showing which way to move the hand. In some cases, the robot produces a constant force that assists or resists the movement. The subject slowly moves the hand until a tone is sounded. They then use the other hand to move a mouse cursor to show where they think their hand is located. DM Wolpert et al. (1995) Science 269:1880.

(Figure: results for the assistive and resistive force conditions.)

(Panels A–C: motor command, state of the body, sensory measurement.) The generative model describes the actual dynamics of the limb; a separate model is used by the brain to estimate the sensory state from sensory feedback. For whatever reason, the brain has an incorrect model of the arm: it overestimates the effect of motor commands on changes in limb position.

Initial conditions: the subject can see the hand and has no uncertainty regarding its position and velocity. The estimation then cycles through: a forward model of state change and feedback; the actual observation; an estimate of state that incorporates the prior and the observation; and a forward model to establish the prior and the uncertainty for the next state (see the sketch below).
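In symbols, one plausible form of this estimation cycle (my notation, ignoring sensory delays; $\hat{A}$ and $\hat{B}$ denote the brain's possibly inaccurate model of the true arm dynamics A and B):

$$\text{forward model / prior:}\quad \hat{x}^{(t|t-1)} = \hat{A}\,\hat{x}^{(t-1|t-1)} + \hat{B}\, u^{(t-1)}, \qquad P^{(t|t-1)} = \hat{A}\, P^{(t-1|t-1)} \hat{A}^T + Q$$
$$\text{observation:}\quad y^{(t)} = C\, x^{(t)} + \eta^{(t)}$$
$$\text{update:}\quad K^{(t)} = P^{(t|t-1)} C^T \left(C\, P^{(t|t-1)} C^T + R\right)^{-1}, \qquad \hat{x}^{(t|t)} = \hat{x}^{(t|t-1)} + K^{(t)}\left(y^{(t)} - C\,\hat{x}^{(t|t-1)}\right), \qquad P^{(t|t)} = \left(I - K^{(t)} C\right) P^{(t|t-1)}$$

An overestimated $\hat{B}$ makes the prior overshoot the true hand displacement, producing the bias in the estimates described on the previous slide.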

(Figure: for a single movement, the motor command u, the actual and estimated position (cm), and the Kalman gain over time (sec), with the time of the “beep” marked; for movements of various lengths, the bias (cm) and variance (cm²) of the estimate at the end of movement plotted against total movement time (sec).)