Homework 2

Let us simulate the savings experiment of Kojima et al. (2004), assuming that the learner models the hidden state of the world with a 3x1 vector whose components change on slow, medium, and fast time scales. As before, the learner makes the following assumptions:

1. The world has many hidden states. What I observe is a linear combination of these states.
2. The hidden states change from trial to trial. Some change slowly, others change quickly.
3. The states that change quickly have larger noise than the states that change slowly.

(Slide figure A: the learner's model of the world.)
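These three assumptions amount to a linear-Gaussian state-space model that a Kalman filter can track. A minimal sketch of its form is below; the notation (w for the hidden state, x for the input, y for the observation) follows the homework, but the numerical entries of A and Q shown in the comments are illustrative assumptions, not the values from the course slide.

```latex
% Sketch of the learner's assumed generative model.
% The example values of A and Q below are placeholders, not the slide's values.
\begin{align*}
\mathbf{w}^{(n+1)} &= A\,\mathbf{w}^{(n)} + \boldsymbol{\varepsilon}_w,
  & \boldsymbol{\varepsilon}_w &\sim \mathcal{N}(\mathbf{0},\, Q) \\
y^{(n)} &= \mathbf{x}^{\top}\mathbf{w}^{(n)} + \varepsilon_y,
  & \varepsilon_y &\sim \mathcal{N}(0,\, \sigma_y^2)
\end{align*}
% Example: A = diag(0.999, 0.99, 0.90) (slow, medium, fast time scales),
% Q = diag(q1, q2, q3) with q1 < q2 < q3 (faster states are noisier),
% and x = [1, 1, 1]^T, so that yhat = x^T w-hat.
```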

Homework 2 (continued)

To simulate the Kojima et al. (2004) experiment, we will provide the learner with data y(n) that starts at zero, jumps up to 1, is then brought down to -1 until the learner's estimate returns to zero, and is finally brought back up to 1 so that we can check for savings (see below). Using the Kalman filter approach, simulate the learner's performance with the initial conditions given on the last slide.

1. Plot y and yhat as a function of trial number, where yhat on a given trial is the current estimate of w times x. Is there savings?
2. Plot w1, w2, and w3.
3. Plot the diagonal elements of the uncertainty matrix. By the end of training, which hidden state has the highest uncertainty?

(Slide figure: the data y(n) that the experimenter provides to the learner.)
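A minimal simulation sketch in Python/NumPy follows. The retention matrix A, the noise covariances Q and R, the initial estimate, the initial uncertainty, and the phase lengths are all placeholder assumptions standing in for the initial conditions on the course slide, which are not reproduced in this transcript.

```python
# Sketch of the Homework 2 simulation.  A, Q, R, the initial estimate, the
# initial uncertainty, and the phase lengths are assumed placeholders; use the
# values given on the course slides instead.
import numpy as np
import matplotlib.pyplot as plt

A = np.diag([0.999, 0.99, 0.90])   # slow, medium, fast retention (assumed)
Q = np.diag([1e-6, 1e-5, 1e-4])    # faster states get larger state noise (assumed)
R = 0.02                           # observation-noise variance (assumed)
x = np.ones(3)                     # yhat = x' * w_hat

w = np.zeros(3)                    # initial state estimate (assumed)
P = 0.01 * np.eye(3)               # initial uncertainty (assumed)
y_hist, yhat_hist, w_hist, p_hist = [], [], [], []

def kalman_step(w, P, y):
    """One trial: predict with the learner's model, then update with observation y."""
    w_pred = A @ w
    P_pred = A @ P @ A.T + Q
    yhat = x @ w_pred                               # prediction before seeing y
    k = P_pred @ x / (x @ P_pred @ x + R)           # Kalman gain
    w_new = w_pred + k * (y - yhat)
    P_new = P_pred - np.outer(k, x) @ P_pred        # (I - k x') P_pred
    return w_new, P_new, yhat

def run_phase(y_target, n_trials=10000, stop=lambda yhat: False):
    """Present y_target until n_trials elapse or the stop condition fires."""
    global w, P
    for _ in range(n_trials):
        w, P, yhat = kalman_step(w, P, y_target)
        y_hist.append(y_target); yhat_hist.append(yhat)
        w_hist.append(w.copy()); p_hist.append(np.diag(P).copy())
        if stop(yhat):
            break

run_phase(0.0, n_trials=50)                    # baseline at y = 0
run_phase(1.0, n_trials=250)                   # adaptation: y jumps up to +1
run_phase(-1.0, stop=lambda yh: yh <= 0.0)     # de-adaptation at -1 until yhat is back to 0
run_phase(1.0, n_trials=250)                   # re-adaptation: test for savings

w_hist, p_hist = np.array(w_hist), np.array(p_hist)
fig, ax = plt.subplots(3, 1, sharex=True)
ax[0].plot(y_hist, '.', label='y'); ax[0].plot(yhat_hist, label='yhat'); ax[0].legend()
ax[1].plot(w_hist); ax[1].legend(['w1 (slow)', 'w2 (medium)', 'w3 (fast)'])
ax[2].plot(p_hist); ax[2].legend(['P11', 'P22', 'P33']); ax[2].set_xlabel('trial')
plt.show()
```

With multi-rate parameters of this general kind, the slow state usually retains part of the first adaptation through the brief de-adaptation phase, which is what tends to produce faster relearning (savings) on re-exposure; check whether the same pattern holds with the parameter values from the slides.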