Mathematical Foundations of BME


1 580.704 Mathematical Foundations of BME
Reza Shadmehr
LMS with Newton-Raphson, weighted least squares, choice of loss function

2 Review of regression
Multivariate regression: batch algorithm, steepest descent, LMS

3 Finding the minimum of a function in a single step
Taylor series expansion (exact if J is quadratic; otherwise more terms are needed here)
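The expansion itself is not reproduced in the transcript; for a scalar weight w, the standard second-order form referenced here is
\[
J(w) \approx J(w_0) + \left.\frac{dJ}{dw}\right|_{w_0}(w - w_0) + \frac{1}{2}\left.\frac{d^2J}{dw^2}\right|_{w_0}(w - w_0)^2 .
\]
Setting \( dJ/dw = 0 \) at the minimum \( w^* \) gives the single-step solution
\[
w^* = w_0 - \left(\left.\frac{d^2J}{dw^2}\right|_{w_0}\right)^{-1}\left.\frac{dJ}{dw}\right|_{w_0}.
\]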

4 Newton-Raphson method
Suppose we want to find the change Δw that moves our weight from its current value to the optimal value w*. The cost function J evaluated at w* can be approximated as a Taylor series expansion. Because J is quadratic, there are no derivatives of order higher than 2. To go from the third line to the fourth line, we assume that in the third line the derivatives have already been evaluated and are therefore constants.
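The slide's equations are not in the transcript; in the multivariate case the same argument gives the usual Newton-Raphson step:
\[
J(\mathbf{w}^*) \approx J(\mathbf{w}) + \nabla J(\mathbf{w})^T \Delta\mathbf{w} + \tfrac{1}{2}\,\Delta\mathbf{w}^T H(\mathbf{w})\,\Delta\mathbf{w},
\qquad \Delta\mathbf{w} = \mathbf{w}^* - \mathbf{w},
\]
where \(H\) is the Hessian. Setting the derivative with respect to \(\Delta\mathbf{w}\) to zero yields
\[
\Delta\mathbf{w} = -H(\mathbf{w})^{-1}\,\nabla J(\mathbf{w}),
\qquad \mathbf{w}^* = \mathbf{w} - H(\mathbf{w})^{-1}\,\nabla J(\mathbf{w}).
\]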

5 The gradient of the loss function
Newton-Raphson
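The slide's own formulas are not reproduced; assuming the usual squared-error cost from the regression review, the gradient and Hessian are
\[
J(\mathbf{w}) = \tfrac{1}{2}\sum_n \bigl(y_n - \mathbf{w}^T\mathbf{x}_n\bigr)^2,
\qquad
\nabla J = -\sum_n \bigl(y_n - \mathbf{w}^T\mathbf{x}_n\bigr)\mathbf{x}_n = -X^T(\mathbf{y} - X\mathbf{w}),
\qquad
H = \sum_n \mathbf{x}_n\mathbf{x}_n^T = X^T X .
\]
A single Newton-Raphson step then lands on the batch least-squares solution:
\[
\mathbf{w}^* = \mathbf{w} + (X^T X)^{-1} X^T(\mathbf{y} - X\mathbf{w}) = (X^T X)^{-1} X^T \mathbf{y}.
\]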

6 The gradient of the loss function

7 LMS algorithm with Newton-Raphson
Steepest descent algorithm. Note: the single-sample estimate x_n x_n^T of the Hessian is a singular matrix. LMS
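The update equations on this slide are not reproduced in the transcript. As an illustration only (the variable names eta and R and the small regularization are mine), here is a minimal NumPy sketch contrasting the plain steepest-descent LMS update with a Newton-style update that preconditions each step with a running, regularized estimate of \(X^T X\), since a single outer product \( \mathbf{x}\mathbf{x}^T \) cannot be inverted on its own:

```python
import numpy as np

# Synthetic linear data: y = w_true . x + noise
rng = np.random.default_rng(0)
n_features, n_samples = 3, 2000
w_true = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(n_samples, n_features))
y = X @ w_true + 0.1 * rng.normal(size=n_samples)

eta = 0.02                            # learning rate for plain LMS
w_lms = np.zeros(n_features)          # steepest-descent (LMS) weights
w_nr = np.zeros(n_features)           # Newton-preconditioned weights
R = 1e-3 * np.eye(n_features)         # running estimate of E[x x^T]; regularized because
                                      # a single outer product x x^T is singular

for n, (x, t) in enumerate(zip(X, y), start=1):
    # LMS: step along the instantaneous gradient of the squared error
    w_lms += eta * (t - w_lms @ x) * x

    # Newton-Raphson flavour: precondition the per-sample gradient by the
    # inverse of the accumulated Hessian estimate (the 1/n step size makes
    # this approach the batch least-squares solution)
    R += (np.outer(x, x) - R) / n
    w_nr += np.linalg.solve(R, (t - w_nr @ x) * x) / n

print("true:", w_true)
print("LMS :", np.round(w_lms, 3))
print("NR  :", np.round(w_nr, 3))
```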

8 Weighted Least Squares
Suppose some data points are more important than others. We want to weight the errors on those data points more heavily.
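A common way to write this weighted cost (the weights \(g_n\) are my notation, not the slide's) is
\[
J(\mathbf{w}) = \tfrac{1}{2}\sum_n g_n\,\bigl(y_n - \mathbf{w}^T\mathbf{x}_n\bigr)^2,
\qquad g_n > 0,
\]
so data points with larger \(g_n\) contribute more to the cost and are fit more closely.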

9 How to handle artifacts in fMRI data
Diedrichsen and Shadmehr, NeuroImage (2005). In fMRI, we typically measure the signal intensity from N voxels at acquisition times t = 1…T. Each of these T measurements constitutes an image. We assume that the time series of voxel n is an arbitrary linear function of the design matrix X plus a noise term: \( \mathbf{y}_n = X\,\boldsymbol{\beta}_n + \boldsymbol{\varepsilon}_n \), where \( \mathbf{y}_n \) is a \( T \times 1 \) column vector, \( X \) is the \( T \times p \) design matrix, and \( \boldsymbol{\beta}_n \) is a \( p \times 1 \) vector. If one source of noise is due to random discrete events, for example artifacts arising from the participant moving their jaw, then only some images will be influenced, violating the assumption of a stationary noise process. To relax this assumption, a simple approach is to allow the variance of the noise in each image to be scaled by a separate parameter. Under the temporal independence assumption, the variance-covariance matrix of the noise process might be \( V = \sigma^2\,\mathrm{diag}(s_1, \dots, s_T) \), where \( s_i \) is a variance scaling parameter for the i-th time that the voxel was imaged.

10 Discrete events (e.g., swallowing) will impact only those images that were acquired during the event. What should be done with these images once they are identified? A typical approach would be to discard images based on some fixed threshold. If we knew V, the optimal approach would be to weight the images by the inverse of their variance. But how do we get V? We can use the residuals from our model. This is a good start, but it has some issues regarding the bias of our estimator of the variance; to improve things, see Diedrichsen and Shadmehr (2005).
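As an illustration of the reweighting idea only, and not the bias-corrected estimator developed in Diedrichsen and Shadmehr (2005), here is a minimal NumPy sketch; the array shapes, the function name weighted_glm, and the naive per-image variance estimate are my assumptions:

```python
import numpy as np

def weighted_glm(Y, X, n_iter=2, eps=1e-12):
    """Y: T x N data (T images, N voxels); X: T x p design matrix.
    Returns the p x N matrix of weighted least-squares estimates."""
    T, N = Y.shape
    weights = np.ones(T)                      # start with ordinary least squares
    for _ in range(n_iter):
        W = np.diag(weights)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ Y)
        resid = Y - X @ beta                  # T x N residual matrix
        s2 = (resid ** 2).mean(axis=1)        # naive per-image variance estimate
        weights = 1.0 / (s2 + eps)            # weight each image by its inverse variance
    return beta

# Usage sketch: one artifact-contaminated image among T
rng = np.random.default_rng(1)
T, N, p = 200, 500, 5
X = rng.normal(size=(T, p))
Y = X @ rng.normal(size=(p, N)) + rng.normal(size=(T, N))
Y[50] += 5 * rng.normal(size=N)               # simulated artifact at image 50
beta_hat = weighted_glm(Y, X)
```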

11 Weighted Least Squares
“Normal equations” for weighted least squares Weighted LMS
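The formulas themselves are not in the transcript; the standard statements consistent with the surrounding slides are, with weight matrix \( G = \mathrm{diag}(g_1,\dots,g_N) \) (for inverse-variance weighting, \( G = V^{-1} \)),
\[
X^T G X\,\mathbf{w} = X^T G\,\mathbf{y}
\quad\Longrightarrow\quad
\mathbf{w} = (X^T G X)^{-1} X^T G\,\mathbf{y},
\]
and the corresponding weighted LMS (online) update for data point \(n\) is
\[
\Delta\mathbf{w} = \eta\,g_n\,\bigl(y_n - \mathbf{w}^T\mathbf{x}_n\bigr)\,\mathbf{x}_n .
\]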

12 Regression with basis functions
In general, predictions can be based on a linear combination of a set of basis functions. Examples of basis sets include a linear basis set and a Gaussian (radial) basis set, also called an RBF. Each Gaussian basis is a local expert: it measures how close the features of the input are to those preferred by expert i.
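The basis-set formulas are not reproduced in the transcript. As a hedged illustration, here is a small NumPy sketch of regression with a one-dimensional Gaussian (RBF) basis; the centers, width, and the sine target are assumptions of the example, not the slide's:

```python
import numpy as np

def gaussian_basis(x, centers, sigma):
    """Column i is expert g_i(x) = exp(-(x - mu_i)^2 / (2 sigma^2)):
    large when the input is close to the expert's preferred value mu_i."""
    return np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * sigma ** 2))

# Fit y = sin(x) with 10 Gaussian experts tiling the input space
rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0.0, 2 * np.pi, 100))
y = np.sin(x) + 0.1 * rng.normal(size=x.size)

centers = np.linspace(0.0, 2 * np.pi, 10)     # preferred inputs of the experts
G = gaussian_basis(x, centers, sigma=0.5)     # design matrix of basis outputs
w = np.linalg.lstsq(G, y, rcond=None)[0]      # linear weights on the experts
y_hat = G @ w                                 # prediction: weighted sum of experts
```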

13 [Figure: a collection of experts spanning the input space; axes labeled "Input space" and "Output".]

14 Regression with basis functions

15 Choice of loss function
In learning, our aim is to find parameters w so as to minimize the expected loss. The expectation is taken over the probability density of the error, given our model parameters: the expected loss is a weighted sum in which each possible loss is weighted by the likelihood of observing that error.
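The expected-loss expression itself is not reproduced in the transcript; a standard form consistent with the text is
\[
E[\text{loss}] = \int \text{loss}(\tilde{y})\; p(\tilde{y}\mid \mathbf{w})\; d\tilde{y},
\]
where \( \tilde{y} \) is the error and \( p(\tilde{y}\mid \mathbf{w}) \) is its density under the current parameters.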

16 Inferring the choice of loss function from behavior
Kording & Wolpert, PNAS (2004) A trial lasted 6 seconds. Over this period, a series of ‘peas’ appeared near the target, drawn from a distribution that depended on the finger position. The object was to “place the finger so that on average, the peas land as close as possible to the target”.

17 The delta loss function
Imagine that the learner cannot arbitrarily change the density of the errors through learning; all the learner can do is shift the density left or right by setting the parameter w. The delta loss function might seem rational here because the task we are considering is a pea-shooting task: if you hit the target you win, if you miss you lose, and the loss does not change whether you miss by a little or by a lot. If the learner uses this loss function, then the smallest possible expected loss occurs when \( p(\tilde{y}) \) has its peak at zero error, \( \tilde{y} = 0 \). Therefore, in the plot on this slide the choice w2 is better than w1. In effect, the w that the learner chooses will depend on the exact shape of \( p(\tilde{y}) \).
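The loss function referenced here is not reproduced; one common way to write a hit-or-miss (delta) loss is
\[
\text{loss}(\tilde{y}) = 1 - \delta(\tilde{y}),
\qquad
E[\text{loss}] = 1 - p(\tilde{y} = 0 \mid \mathbf{w}),
\]
so the expected loss is smallest when the density \( p(\tilde{y}) \) places its peak at \( \tilde{y} = 0 \).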

18 Behavior with the delta loss function
Suppose the "outside" system (e.g., the teacher) sets r. Given the loss function, we can predict what the best w will be for the learner.

19 Behavior with the squared error loss function
We have a density \( p(\tilde{y}) \) whose variance is independent of w. So to minimize the expected loss we should pick the w that produces the smallest \( E[\tilde{y}^2] \); with the variance fixed, that happens at the w that sets the mean of \( p(\tilde{y}) \) equal to zero.
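A one-line justification using a standard identity (not stated in the transcript):
\[
E[\tilde{y}^2] = \mathrm{Var}(\tilde{y}) + \bigl(E[\tilde{y}]\bigr)^2,
\]
so if \( \mathrm{Var}(\tilde{y}) \) does not depend on \( \mathbf{w} \), the expected squared error is minimized by choosing \( \mathbf{w} \) so that \( E[\tilde{y}] = 0 \).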

20 Mean and variance of mixtures of normal distributions
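The formulas for this slide are not in the transcript; the standard results are: if
\[
p(y) = \sum_i \pi_i\, \mathcal{N}(y;\, \mu_i, \sigma_i^2),
\qquad \sum_i \pi_i = 1,
\]
then
\[
E[y] = \sum_i \pi_i\,\mu_i,
\qquad
\mathrm{Var}(y) = \sum_i \pi_i\bigl(\sigma_i^2 + \mu_i^2\bigr) - \Bigl(\sum_i \pi_i\,\mu_i\Bigr)^2 .
\]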

21 Kording & Wolpert, PNAS (2004)
[Figures: behavior predicted by the delta loss vs. typical subjects, and the estimated loss as a function of error in cm.]
Results: large errors are penalized by less than a squared term. The loss function was estimated at: [formula not reproduced in the transcript]. However, note that the largest errors tended to occur very infrequently in this experiment.

