Two Least Squares Applications: Data Fitting and Noise Suppression

How long does it take for this code to run?

After examining the code, you believe that the running time depends entirely on some input parameter n, and …

a good model for the running time is Time(n) = a + b·log₂(n) + c·n + d·n·log₂(n) + e·n², where a, b, c, d, and e are constants whose values are currently unknown.

So you time the code for 30 values of n, and you get these times {(nᵢ, tᵢ)}.

If the model were perfect and there were no errors in the timings, then for some values a, b, c, d, and e:

a + b·log₂(nᵢ) + c·nᵢ + d·nᵢ·log₂(nᵢ) + e·nᵢ² = tᵢ for i = 1, …, 30

But the model was not perfect and there were errors in the timings, so we do not expect to get any values a, b, c, d, and e so that

a + b·log₂(nᵢ) + c·nᵢ + d·nᵢ·log₂(nᵢ) + e·nᵢ² = tᵢ for i = 1, …, 30.

We will settle for values a, b, c, d, and e so that

a + b·log₂(nᵢ) + c·nᵢ + d·nᵢ·log₂(nᵢ) + e·nᵢ² ≈ tᵢ for i = 1, …, 30.

Our sense of

a + b·log₂(nᵢ) + c·nᵢ + d·nᵢ·log₂(nᵢ) + e·nᵢ² ≈ tᵢ for i = 1, …, 30

will be to get a, b, c, d, and e so that the sum of squares of all of the differences,

Σᵢ (a + b·log₂(nᵢ) + c·nᵢ + d·nᵢ·log₂(nᵢ) + e·nᵢ² − tᵢ)²,

is minimized over all possible choices of a, b, c, d, and e.
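
Written in matrix form (a standard presentation, not from the original slides), with x = (a, b, c, d, e)ᵀ, this is the usual least squares objective:

```latex
\min_{a,b,c,d,e} \sum_{i=1}^{30}
  \bigl( a + b\log_2 n_i + c\,n_i + d\,n_i\log_2 n_i + e\,n_i^2 - t_i \bigr)^2
\;=\;
\min_{x \in \mathbb{R}^5} \lVert A x - t \rVert_2^2
```

where row i of A is (1, log₂ nᵢ, nᵢ, nᵢ·log₂ nᵢ, nᵢ²) and t holds the timings; any minimizer satisfies the normal equations AᵀA x = Aᵀt.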

We form a 30-by-5 matrix whose rows are

[ 1   log₂(nᵢ)   nᵢ   nᵢ·log₂(nᵢ)   nᵢ² ] for i = 1, …, 30

and a column vector of length 30 holding the timings tᵢ for i = 1, …, 30.

After solving the least squares system to get the best values of a, b, c, d, and e, we plot a + b·log₂(n) + c·n + d·n·log₂(n) + e·n².
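
A minimal sketch of this fit in Python with NumPy. The original 30 timings are not given, so the data here is synthetic, generated from assumed coefficients purely to illustrate the mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)

# 30 hypothetical problem sizes; the "true" coefficients below are
# assumptions used only to synthesize timing data.
n = np.arange(100, 3100, 100).astype(float)
true_coeffs = np.array([5.0, 2.0, 0.03, 0.001, 1e-6])  # a, b, c, d, e

# The 30-by-5 matrix whose rows are [1, log2(n_i), n_i, n_i*log2(n_i), n_i^2]
A = np.column_stack([np.ones_like(n), np.log2(n), n, n * np.log2(n), n**2])

# "Measured" timings = model + small measurement noise
t = A @ true_coeffs + rng.normal(scale=0.01, size=n.size)

# Least squares: minimize ||A x - t||^2 over x = (a, b, c, d, e)
coeffs, residuals, rank, sv = np.linalg.lstsq(A, t, rcond=None)
```

`np.linalg.lstsq` solves the problem via the SVD; evaluating `A @ coeffs` gives the fitted curve a + b·log₂(n) + c·n + d·n·log₂(n) + e·n² at the sampled points, ready to plot against the measurements.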

An application for noise suppression. The intent is to recover a sound wave that has been buried in noise.

The matrix is 32,768 by 13. The columns are discrete, not continuous (although the plots make them appear continuous because there are so many elements). Each column is 1/2 second worth of sound samples; the sound is sampled at 65,536 samples per second.

A section of the columns of the matrix

Waveform of an A Major chord

Waveform of a noisy A Major chord. This is the right-hand side of the least squares system.

Waveform of the recovered A Major chord (original in blue, recovered in green).
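
A sketch of the same idea in Python. The actual 13 columns of the matrix are not reproduced in the slides, so the basis below uses 13 hypothetical sinusoids; the point is only to show least squares projecting a noisy signal onto the column space:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 65_536                      # samples per second
samples = fs // 2                # 1/2 second -> 32,768 samples
time = np.arange(samples) / fs

# Hypothetical 32,768-by-13 matrix: 13 sinusoidal columns (assumed
# frequencies; the real columns would come from the chord's notes).
freqs = 440.0 * 2.0 ** (np.arange(13) / 12.0)
B = np.column_stack([np.sin(2 * np.pi * f * time) for f in freqs])

# A "clean" chord living in the column space, then buried in noise
chord = B @ rng.uniform(0.5, 1.0, size=13)
noisy = chord + rng.normal(scale=2.0, size=samples)  # the right-hand side

# Least squares finds the combination of columns closest to the noisy signal
x, *_ = np.linalg.lstsq(B, noisy, rcond=None)
recovered = B @ x
```

Because random noise is nearly orthogonal to the 13-dimensional column space, the projection `B @ x` discards almost all of it, which is exactly the suppression seen in the recovered waveform.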

1. Waveform of an A Major chord
2. Waveform of a noisy A Major chord
3. Waveform of the recovered A Major chord

Pushing the limits: we will make the noise 32 times larger than the chord and see whether the chord can still be reconstructed.

1. Waveform of an A Major chord
2. Waveform of a very noisy A Major chord
3. Waveform of the recovered A Major chord
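
The "pushing the limits" experiment can be sketched the same way (again with a hypothetical sinusoidal basis): scale the noise so its norm is 32 times the chord's, and check that least squares still pulls out something far closer to the original than the noisy input was:

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 65_536
samples = fs // 2
time = np.arange(samples) / fs

# Same hypothetical 32,768-by-13 basis of sinusoids as before
freqs = 440.0 * 2.0 ** (np.arange(13) / 12.0)
B = np.column_stack([np.sin(2 * np.pi * f * time) for f in freqs])
chord = B @ rng.uniform(0.5, 1.0, size=13)

# Noise whose norm is exactly 32 times the chord's norm
noise = rng.normal(size=samples)
noise *= 32 * np.linalg.norm(chord) / np.linalg.norm(noise)
noisy = chord + noise

x, *_ = np.linalg.lstsq(B, noisy, rcond=None)
recovered = B @ x
```

Only the component of the noise lying in the 13-dimensional column space survives the projection, roughly a √(13/32768) ≈ 2% fraction, so even noise 32 times larger than the chord leaves the reconstruction recognizable.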