
1 Statistical learning and optimal control:
A framework for biological learning and motor control Lecture 1: Iterative learning and the Kalman filter Reza Shadmehr Johns Hopkins School of Medicine

2 Stochastic optimal control: block diagram
[Figure: block diagram with components: goal selector; motor command generator; body + environment (state change); forward model, producing predicted sensory consequences; sensory system (proprioception, vision, audition), producing measured sensory consequences; Kalman filter (parameter estimation, integration), producing the belief about the state of the body and world.]

3 Results from classical conditioning

4 Effect of time on memory: spontaneous recovery

5 Effect of time on memory: inter-trial interval and retention
[Figure: performance during training at ITI=2, ITI=14, and ITI=98, with testing at 1 day or 1 week (averaged together) and a test at 1 week.]

6 Integration of predicted state with sensory feedback

7 Choice of motor commands: optimality in saccades and reaching movements
[Figure: eye velocity (deg/sec) vs. time (sec) for saccade sizes 5 to 50 deg.]

8 Helpful reading:
Mathematical background: Raul Rojas, The Kalman Filter. Freie Universität Berlin. N.A. Thacker and A.J. Lacey, Tutorial: The Kalman Filter. University of Manchester.
Application to animal learning: Peter Dayan and Angela J. Yu (2003) Uncertainty and learning. IETE Journal of Research 49:
Application to sensorimotor control: D. Wolpert, Z. Ghahramani, M.I. Jordan (1995) An internal model for sensorimotor integration. Science

9 Linear regression, maximum likelihood, and parameter uncertainty
A noisy process produces n data points and we form an ML estimate of w. We run the noisy process again with the same sequence of x’s and re-estimate w: The distribution of the resulting w will have a var-cov that depends only on the sequence of inputs, the bases that encode those inputs, and the noise sigma.
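Written out (a standard reconstruction; the slide's own equations are not preserved in this transcript), the estimate and its var-cov are:

```latex
y = Xw^* + \varepsilon, \quad \varepsilon \sim \mathcal{N}(0,\sigma^2 I), \qquad
\hat{w}_{ML} = (X^\top X)^{-1} X^\top y

\operatorname{Cov}(\hat{w}_{ML}) = \sigma^2 (X^\top X)^{-1}
```

Note that the covariance depends only on X (the input sequence and the bases that encode it) and the noise σ, as stated above.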

10 Bias of the parameter estimates for a given X
How does the ML estimate behave in the presence of noise in y? The "true" underlying process: y = Xw* + ε, where ε is an n×1 vector of noise. What we measured: y. Our model of the process: ŷ = Xŵ. ML estimate: ŵ = (XᵀX)⁻¹Xᵀy = (XᵀX)⁻¹Xᵀ(Xw* + ε) = w* + (XᵀX)⁻¹Xᵀε. Because ε is normally distributed with zero mean: E[ŵ] = w*. In other words: for a given X, the ML estimate is unbiased.

11 Variance of the parameter estimates for a given X
ŵ = w* + (XᵀX)⁻¹Xᵀε: a matrix of constants, (XᵀX)⁻¹Xᵀ, times a vector of random variables, ε. Assume: ε ~ N(0, σ²I). For a given X, the ML (or least-squares) estimate of our parameter has this normal distribution: ŵ ~ N(w*, σ²(XᵀX)⁻¹), where σ²(XᵀX)⁻¹ is the m×m var-cov matrix of the estimate.

12 The Gaussian distribution and its var-cov matrix
A 1-D Gaussian distribution is defined as p(x) = (1/√(2πσ²)) exp(−(x−μ)²/(2σ²)). In n dimensions, it generalizes to p(x) = (2π)^(−n/2) |C|^(−1/2) exp(−½ (x−μ)ᵀC⁻¹(x−μ)). When x is a vector, the variance is expressed in terms of a covariance matrix C, with entries cij = ρij σi σj, where ρij corresponds to the degree of correlation between variables xi and xj.
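A small numeric check of the covariance-matrix construction (a sketch; the σ and ρ values below are made up for illustration):

```python
import math

# Build a 2-D covariance matrix C from standard deviations and a
# correlation coefficient: c_ij = rho_ij * sigma_i * sigma_j.
def covariance_matrix(sigma1, sigma2, rho):
    return [[sigma1 * sigma1, rho * sigma1 * sigma2],
            [rho * sigma1 * sigma2, sigma2 * sigma2]]

# Recover the correlation from a covariance matrix.
def correlation(C):
    return C[0][1] / math.sqrt(C[0][0] * C[1][1])

C = covariance_matrix(1.0, 2.0, 0.5)   # illustrative values
print(C)               # [[1.0, 1.0], [1.0, 4.0]]
print(correlation(C))  # 0.5
```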

13 Correlated and uncorrelated Gaussian variables
[Figure: 2-D Gaussian samples in which x1 and x2 are positively correlated, not correlated, and negatively correlated.]

14 Parameter uncertainty: Example 1
Input history: x1 was "on" most of the time. I'm pretty certain about w1. However, x2 was "on" only once, so I'm uncertain about w2.

15 Parameter uncertainty: Example 2
Input history: x1 and x2 were "on" mostly together. The weight var-cov matrix shows what I learned: I do not know the individual values of w1 and w2 with much certainty. x1 appeared slightly more often than x2, so I'm a little more certain about the value of w1.

16 Parameter uncertainty: Example 3
Input history: x2 was mostly "on". I'm pretty certain about w2, but I am very uncertain about w1. Occasionally x1 and x2 were on together, so the estimates of w1 and w2 are correlated (a nonzero off-diagonal term in the var-cov matrix).
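The three examples can be checked numerically by computing σ²(XᵀX)⁻¹ for a toy input history (the particular 0/1 input sequence below is my own illustration, not the slide's data):

```python
# Var-cov of the ML estimate: Cov(w_hat) = sigma^2 * (X^T X)^{-1}.
# Rows of X are trials; columns are the inputs x1, x2 (1 = "on").
def param_covariance(X, sigma2):
    # Entries of the 2x2 matrix X^T X
    a = sum(r[0] * r[0] for r in X)
    b = sum(r[0] * r[1] for r in X)
    d = sum(r[1] * r[1] for r in X)
    det = a * d - b * b
    # 2x2 inverse, scaled by sigma^2
    return [[sigma2 * d / det, -sigma2 * b / det],
            [-sigma2 * b / det, sigma2 * a / det]]

# Example-3-like history: x2 mostly on, x1 and x2 once on together.
X = [[0, 1], [0, 1], [0, 1], [0, 1], [1, 1]]
P = param_covariance(X, sigma2=1.0)
print(P)  # var(w1) large, var(w2) small, off-diagonal negative
```

Here var(w1) = 1.25 dominates var(w2) = 0.25, and the negative off-diagonal term captures the correlation between the two estimates.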

17 Effect of uncertainty on learning rate
When you observe an error in trial n, the amount that you should change w should depend on how certain you are about w. The more certain you are, the less you should be influenced by the error. The less certain you are, the more you should "pay attention" to the error. ŵ(n+1) = ŵ(n) + k(n)(y(n) − x(n)ᵀŵ(n)), where ŵ and the Kalman gain k(n) are m×1 and y(n) − x(n)ᵀŵ(n) is the error. Rudolph E. Kalman (1960) A new approach to linear filtering and prediction problems. Transactions of the ASME–Journal of Basic Engineering, 82 (Series D): Research Institute for Advanced Study, 7212 Bellona Ave, Baltimore, MD

18 Example of the Kalman gain: running estimate of average
w(n) is the online estimate of the mean of y: w(n) = w(n−1) + (1/n)(y(n) − w(n−1)), combining the past estimate w(n−1) with the new measure y(n). Kalman gain: k(n) = 1/n, so the learning rate decreases as the number of samples increases. As n increases, we trust our past estimate w(n−1) a lot more than the new observation y(n).
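A runnable sketch of the running-average update with gain 1/n (the data values are illustrative):

```python
# Online estimate of the mean of y with Kalman gain k(n) = 1/n:
# w(n) = w(n-1) + (1/n) * (y(n) - w(n-1))
def running_mean(ys):
    w = 0.0
    for n, y in enumerate(ys, start=1):
        k = 1.0 / n            # gain shrinks as samples accumulate
        w = w + k * (y - w)
    return w

ys = [2.0, 4.0, 6.0, 8.0]
print(running_mean(ys))  # 5.0, identical to the batch mean
```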

19 Example of the Kalman gain: running estimate of variance
σ̂²(n) is the online estimate of the variance of y.
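The slide's own recursion is not preserved in this transcript; one standard form of the online variance update (Welford's recursion, paired with the running mean from the previous slide) looks like this:

```python
# Online estimates of the mean and variance of y.
# Mean:      w(n) = w(n-1) + (1/n)(y(n) - w(n-1))
# Variance:  M(n) = M(n-1) + (y(n) - w(n-1)) * (y(n) - w(n))  (Welford)
def running_mean_var(ys):
    w, m = 0.0, 0.0
    for n, y in enumerate(ys, start=1):
        w_prev = w
        w = w + (y - w) / n
        m = m + (y - w_prev) * (y - w)
    # Unbiased sample variance
    return w, m / (len(ys) - 1)

w, var = running_mean_var([2.0, 4.0, 6.0, 8.0])
print(w, var)  # 5.0 and the sample variance 20/3
```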

20 Objective: adjust learning gain in order to minimize model uncertainty
Hypothesis about data observation in trial n: y(n) = x(n)ᵀw* + ε(n).
ŵ(n|n−1): my estimate of w* before I see y in trial n, given that I have seen y up to n−1.
Error in trial n: e(n) = y(n) − x(n)ᵀŵ(n|n−1).
ŵ(n|n): my estimate after I see y in trial n.
w̃(n|n−1) = w* − ŵ(n|n−1): parameter error before I saw the data (a priori error).
w̃(n|n) = w* − ŵ(n|n): parameter error after I saw the data point (a posteriori error).
P(n|n−1) = E[w̃(n|n−1)w̃(n|n−1)ᵀ]: a priori var-cov of parameter error.
P(n|n) = E[w̃(n|n)w̃(n|n)ᵀ]: a posteriori var-cov of parameter error.

21 Some observations about model uncertainty
We note that P(n) is simply the var-cov matrix of our model weights. It represents the uncertainty in our model. We want to update the weights so as to minimize a measure of this uncertainty.

22 Trace of parameter var-cov matrix is the sum of squared parameter errors
Our objective is to find the learning rate k (the Kalman gain) that minimizes the sum of squared errors in our parameter estimates. This sum is the trace of the P matrix: tr(P(n|n)) = Σi E[(w*i − ŵi(n|n))²]. Therefore, given observation y(n), we want to find k such that we minimize the variance of our estimate ŵ.

23 Find K to minimize trace of uncertainty
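The derivation itself is not preserved in this transcript; a standard sketch, using the a priori error w̃ = w* − ŵ(n|n−1) from slide 20 and writing P for P(n|n−1):

```latex
\tilde{w}(n|n) = \tilde{w}(n|n-1) - k\,e(n), \qquad
e(n) = x^\top \tilde{w}(n|n-1) + \varepsilon(n)

P(n|n) = (I - kx^\top)\,P\,(I - kx^\top)^\top + \sigma^2 kk^\top

\operatorname{tr} P(n|n) = \operatorname{tr} P - 2\,x^\top P k
  + (x^\top P x + \sigma^2)\,k^\top k

\frac{\partial}{\partial k}\operatorname{tr} P(n|n)
  = -2Px + 2\,(x^\top P x + \sigma^2)\,k = 0
  \;\Rightarrow\;
  k = \frac{Px}{x^\top P x + \sigma^2}
```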

24 Find K to minimize trace of uncertainty
The denominator term x(n)ᵀP(n|n−1)x(n) + σ² is a scalar, so the minimizing gain is k(n) = P(n|n−1)x(n) / (x(n)ᵀP(n|n−1)x(n) + σ²).

25 The Kalman gain If I have a lot of uncertainty about my model, P is large compared to sigma. I will learn a lot from the current error. If I am pretty certain about my model, P is small compared to sigma. I will tend to ignore the current error.
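In the scalar case the gain reduces to k = P/(P + σ²); a quick sketch of the two regimes described above (the P and σ² values are illustrative):

```python
# Scalar Kalman gain: k = P / (P + sigma2).
def kalman_gain(P, sigma2):
    return P / (P + sigma2)

# Uncertain model (P >> sigma^2): learn a lot from the error.
print(kalman_gain(P=100.0, sigma2=1.0))  # ~0.99
# Certain model (P << sigma^2): mostly ignore the error.
print(kalman_gain(P=0.01, sigma2=1.0))   # ~0.01
```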

26 Update of model uncertainty
P(n|n) = (I − k(n)x(n)ᵀ)P(n|n−1): model uncertainty decreases with every data point that you observe.

27 Hidden variable: in this model, we hypothesize that the hidden variables, i.e., the "true" weights, do not change from trial to trial: w(n+1) = w(n). Observed variables: y(n) = x(n)ᵀw(n) + ε(n), with ε ~ N(0, σ²).
A priori estimate of mean and variance of the hidden variable before I observe the first data point: ŵ(1|0), P(1|0).
Update of the estimate of the hidden variable after I observe the data point:
k(n) = P(n|n−1)x(n) / (x(n)ᵀP(n|n−1)x(n) + σ²)
ŵ(n|n) = ŵ(n|n−1) + k(n)(y(n) − x(n)ᵀŵ(n|n−1))
P(n|n) = (I − k(n)x(n)ᵀ)P(n|n−1)
Forward projection of the estimate to the next trial: ŵ(n+1|n) = ŵ(n|n), P(n+1|n) = P(n|n).

28 In this model, we hypothesize that the hidden variables change from trial to trial.
State update: w(n+1) = A w(n) + η(n), with η ~ N(0, Q).
A priori estimate of mean and variance of the hidden variable before I observe the first data point: ŵ(1|0), P(1|0).
Update of the estimate of the hidden variable after I observe the data point: the same gain, mean, and variance updates as in the stationary model.
Forward projection of the estimate to the next trial: ŵ(n+1|n) = A ŵ(n|n), P(n+1|n) = A P(n|n)Aᵀ + Q.
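Putting the update and forward-projection steps together for a scalar state (a minimal sketch; the random-walk choice a = 1 and the noise values are illustrative):

```python
# Scalar Kalman filter tracking a hidden weight w.
# State model:  w(n+1) = a*w(n) + eta,  eta ~ N(0, Q)
# Observation:  y(n)   = w(n) + eps,    eps ~ N(0, sigma2)
def kalman_filter(ys, a=1.0, Q=0.01, sigma2=1.0, w0=0.0, P0=1.0):
    w, P = w0, P0                  # a priori estimate before trial 1
    for y in ys:
        k = P / (P + sigma2)       # Kalman gain
        w = w + k * (y - w)        # update after seeing y(n)
        P = (1.0 - k) * P          # a posteriori uncertainty
        w = a * w                  # forward projection to next trial
        P = a * a * P + Q          # uncertainty grows by Q
    return w, P

# Noise-free measurements of a constant true weight w* = 2:
w_hat, P = kalman_filter([2.0] * 100)
print(w_hat, P)  # w_hat converges near 2; P settles near a steady state
```

With Q > 0 the prior uncertainty never shrinks to zero, so the gain, and hence the learning rate, stays bounded away from zero, which is the point of slide 29.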

29 Uncertainty about my model parameters
Uncertainty about my measurement Learning rate is proportional to the ratio between two uncertainties: my model vs. my measurement. After we observe an input x, the uncertainty associated with the weight of that input decreases. Because of state update noise Q, uncertainty increases as we form the prior for the next trial.

30 Comparison of Kalman gain to LMS
See derivation of this in homework In the Kalman gain approach, the P matrix depends on the history of all previous and current inputs. In LMS, the learning rate is simply a constant that does not depend on past history. With the Kalman gain, our estimate converges on a single pass over the data set. In LMS, we don’t estimate the var-cov matrix P on each trial, but we will need multiple passes before our estimate converges.
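A toy contrast between the two learning rules when estimating a mean (the constant data and the fixed LMS rate of 0.05 are my own choices for illustration):

```python
# Estimate the mean of y two ways, each in a single pass:
# a 1/n gain (Kalman-style) vs. a fixed learning rate (LMS-style).
def one_pass(ys, rate=None):
    w = 0.0
    for n, y in enumerate(ys, start=1):
        k = rate if rate is not None else 1.0 / n
        w = w + k * (y - w)
    return w

ys = [5.0] * 20
print(one_pass(ys))             # 5.0: exact after a single pass
print(one_pass(ys, rate=0.05))  # ~3.2: still far from 5 after one pass
```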

31 Effect of state and measurement noise on the Kalman gain
[Figure: parameter uncertainty and Kalman gain over trials, under high vs. low state-update noise and high vs. low measurement noise.] High noise in the state-update model produces increased uncertainty in model parameters. This produces high learning rates. High noise in the measurement also increases parameter uncertainty. But this increase is small relative to measurement uncertainty. Higher measurement noise leads to lower learning rates.

32 Effect of state transition auto-correlation on the Kalman gain
[Figure: Kalman gain over trials for state models with different auto-correlations a.] Learning rate is higher in a state model that has high auto-correlations (larger a). That is, if the learner assumes that the world is changing slowly (a is close to 1), then the learner will have a large learning rate.

