Lecture 10 Nonuniqueness and Localized Averages

Syllabus
Lecture 01: Describing Inverse Problems
Lecture 02: Probability and Measurement Error, Part 1
Lecture 03: Probability and Measurement Error, Part 2
Lecture 04: The L2 Norm and Simple Least Squares
Lecture 05: A Priori Information and Weighted Least Squares
Lecture 06: Resolution and Generalized Inverses
Lecture 07: Backus-Gilbert Inverse and the Trade-Off of Resolution and Variance
Lecture 08: The Principle of Maximum Likelihood
Lecture 09: Inexact Theories
Lecture 10: Nonuniqueness and Localized Averages
Lecture 11: Vector Spaces and Singular Value Decomposition
Lecture 12: Equality and Inequality Constraints
Lecture 13: L1, L∞ Norm Problems and Linear Programming
Lecture 14: Nonlinear Problems: Grid and Monte Carlo Searches
Lecture 15: Nonlinear Problems: Newton's Method
Lecture 16: Nonlinear Problems: Simulated Annealing and Bootstrap Confidence Intervals
Lecture 17: Factor Analysis
Lecture 18: Varimax Factors, Empirical Orthogonal Functions
Lecture 19: Backus-Gilbert Theory for Continuous Problems; Radon's Problem
Lecture 20: Linear Operators and Their Adjoints
Lecture 21: Fréchet Derivatives
Lecture 22: Exemplary Inverse Problems, incl. Filter Design
Lecture 23: Exemplary Inverse Problems, incl. Earthquake Location
Lecture 24: Exemplary Inverse Problems, incl. Vibrational Problems

Purpose of the Lecture
Show that null vectors are the source of nonuniqueness
Show why some localized averages of model parameters are unique while others aren't
Show how nonunique averages can be bounded using prior information on the bounds of the underlying model parameters
Introduce the Linear Programming problem

Part 1: null vectors as the source of nonuniqueness in linear inverse problems

suppose two different solutions, m^(1) and m^(2), exactly satisfy the same data: G m^(1) = d_obs and G m^(2) = d_obs; since there are two, the solution is nonunique

then the difference between the solutions satisfies G (m^(1) − m^(2)) = 0

the quantity m_null = m^(1) − m^(2) is called a null vector; it satisfies G m_null = 0

an inverse problem can have more than one null vector, m_null^(1), m_null^(2), m_null^(3), ...; any linear combination of null vectors is a null vector: α m_null^(1) + β m_null^(2) + γ m_null^(3) is a null vector for any α, β, γ

suppose that a particular choice of model parameters, m_par, satisfies G m_par = d_obs with error E

then the general solution m_gen = m_par + Σ_i α_i m_null^(i) has the same error E for any choice of the coefficients α_i

since e = d_obs − G m_gen = d_obs − G m_par − Σ_i α_i G m_null^(i) = d_obs − G m_par − Σ_i α_i · 0 = d_obs − G m_par

since the α_i are arbitrary, the solution is nonunique

hence an inverse problem is nonunique if it has null vectors

example: consider the inverse problem G m = d_1 with the single-row data kernel G = [¼ ¼ ¼ ¼], so that the one datum d_1 is the mean of the four model parameters; a solution with zero error is m_par = [d_1, d_1, d_1, d_1]^T

the null vectors are easy to work out: m_null^(1) = [1, −1, 0, 0]^T, m_null^(2) = [0, 1, −1, 0]^T, m_null^(3) = [0, 0, 1, −1]^T; note that G times any of these vectors is zero

the general solution to the inverse problem is m_gen = m_par + α_1 m_null^(1) + α_2 m_null^(2) + α_3 m_null^(3) = [d_1 + α_1, d_1 − α_1 + α_2, d_1 − α_2 + α_3, d_1 − α_3]^T, with α_1, α_2, α_3 arbitrary
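As a quick numerical check, here is a minimal MatLab sketch; the kernel G and the null-vector basis N are the reconstructions given above, and the value of d_1 is arbitrary:

d1 = 3.7;                        % arbitrary value for the single datum
G  = [1 1 1 1]/4;                % the datum is the mean of the four model parameters
N  = [ 1  0  0;                  % columns are the three null vectors
      -1  1  0;
       0 -1  1;
       0  0 -1 ];
disp( G*N );                     % [0 0 0]: G times any null vector is zero
mpar  = d1*ones(4,1);            % particular solution with zero error
alpha = randn(3,1);              % arbitrary coefficients
mgen  = mpar + N*alpha;          % the general solution
disp( G*mgen - d1 );             % 0: the error is unchanged for any alpha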

Part 2: why some localized averages are unique while others aren't

let's denote a weighted average of the model parameters as ⟨m⟩ = a^T m, where a is the vector of weights

a may or may not be “localized”

examples: a = [0.25, 0.25, 0.25, 0.25]^T is not localized; a = [0.90, 0.07, 0.02, 0.01]^T is localized near m_1

now compute the average of the general solution: ⟨m⟩ = a^T m_gen = a^T m_par + Σ_i α_i a^T m_null^(i)

if the term a^T m_null^(i) is zero for all i, then ⟨m⟩ does not depend on the α_i, so the average is unique

an average ⟨m⟩ = a^T m is unique if the average of every null vector is zero, that is, if a^T m_null^(i) = 0 for all i
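This condition is easy to test numerically; a minimal sketch, reusing the reconstructed null vectors of the earlier example and the two averaging vectors shown above:

N  = [1 0 0; -1 1 0; 0 -1 1; 0 0 -1];   % columns are the null vectors from the example
a1 = [0.25 0.25 0.25 0.25]';            % the non-localized average
a2 = [0.90 0.07 0.02 0.01]';            % the average localized near m1
disp( a1'*N );   % [0 0 0]: a1 zeroes every null vector, so <m> = a1'*m is unique
disp( a2'*N );   % nonzero entries: a2 does not, so <m> = a2'*m is nonunique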

if we just pick an average out of the hat because we like it (it's nicely localized), chances are that it will not zero all the null vectors, so the average will not be unique

relationship to the model resolution matrix R: the estimate m_est = G^(−g) d_obs = G^(−g) G m_true = R m_true, so each row of the resolution matrix R acts as an averaging vector a^T applied to the true model

and each such a^T is a linear combination of the rows of the data kernel G, so it zeroes every null vector and the corresponding average is unique
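Equivalently, one can test whether a^T can be built from the rows of G with a rank comparison; a minimal sketch, again using the reconstructed example kernel:

G  = [1 1 1 1]/4;                      % example kernel from above
a1 = [0.25 0.25 0.25 0.25]';
a2 = [0.90 0.07 0.02 0.01]';
disp( rank([G; a1']) == rank(G) );     % 1 (true):  a1' is a multiple of G's row, so unique
disp( rank([G; a2']) == rank(G) );     % 0 (false): a2' is not in the row space of G, so nonunique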

if we just pick an average out of the hat because we like it (it's nicely localized), it's not likely that it can be built out of the rows of G, so it will not be unique

suppose we pick an average that is not unique; is it of any use?

Part 3: bounding localized averages even though they are nonunique

we will now show that if we can put weak bounds on m, they may translate into stronger bounds on ⟨m⟩

example: with a = [1/3, 1/3, 1/3, 0]^T, the average of the general solution is ⟨m⟩ = a^T m_gen = d_1 + (1/3) α_3

example: with a = [1/3, 1/3, 1/3, 0]^T, the average ⟨m⟩ = d_1 + (1/3) α_3 depends on the arbitrary coefficient α_3, so it is nonunique

but suppose m_i is bounded, 0 ≤ m_i ≤ 2d_1; then the smallest possible α_3 is −d_1 and the largest is +d_1

(2/3) d_1 ≤ ⟨m⟩ ≤ (4/3) d_1, obtained by inserting the smallest α_3 = −d_1 for the lower bound and the largest α_3 = +d_1 for the upper bound

(2/3) d_1 ≤ ⟨m⟩ ≤ (4/3) d_1: these bounds on ⟨m⟩ are tighter than the bounds 0 ≤ m_i ≤ 2d_1 on the individual model parameters
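The arithmetic behind these bounds, written out as a short derivation (it assumes the reconstructed averaging vector a = [1/3, 1/3, 1/3, 0]^T and the null-vector basis used above):

\langle m \rangle = \mathbf{a}^{T}\mathbf{m}_{\mathrm{gen}}
  = \tfrac{1}{3}\bigl[(d_1+\alpha_1)+(d_1-\alpha_1+\alpha_2)+(d_1-\alpha_2+\alpha_3)\bigr]
  = d_1 + \tfrac{1}{3}\alpha_3

0 \le m_4 = d_1-\alpha_3 \le 2d_1
  \;\Longrightarrow\; -d_1 \le \alpha_3 \le d_1
  \;\Longrightarrow\; \tfrac{2}{3}d_1 \le \langle m \rangle \le \tfrac{4}{3}d_1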

the question is how to do this in more complicated cases

Part 4: the Linear Programming problem

the Linear Programming problem: find the x that minimizes z = f^T x subject to linear inequality constraints A x ≤ b, linear equality constraints A_eq x = b_eq, and bounds b_lower ≤ x ≤ b_upper

flipping the sign of f switches minimization to maximization; flipping the signs of A and b switches the inequality constraints from ≤ to ≥

in Business: f is the unit profit and x the quantity of each product, so z = f^T x is the profit, which is maximized; x ≥ 0 enforces no negative production; A x ≤ b encodes the physical limitations of the factory, government regulations, etc.; one cares about both the profit z and the product quantities x

in our case: f = a and x = m; the bounds are the prior bounds on m; the inequality constraints A x ≤ b are not needed; the equality constraint is G m = d; first minimize a^T m, then maximize it; we care only about ⟨m⟩, not about m itself

In MatLab
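The code on this slide was not preserved in the transcript; what follows is a minimal sketch of how the two calls might look using MatLab's linprog, where the names a, G, dobs, mlb, and mub are illustrative:

% bound the localized average <m> = a'*m with two linear programs:
% equality constraint G*m = dobs, prior bounds mlb <= m <= mub
[mest1, zmin] = linprog(  a, [], [], G, dobs, mlb, mub );  % minimize a'*m
[mest2, zmax] = linprog( -a, [], [], G, dobs, mlb, mub );  % maximize a'*m by minimizing -a'*m
lowerbound = zmin;     % smallest possible value of <m>
upperbound = -zmax;    % largest possible value of <m> (undo the sign flip)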

Example 1: simple data kernel; one datum: the sum of the m_i is zero; bounds: |m_i| ≤ 1; average ⟨m⟩: the unweighted average of K of the model parameters
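A sketch of this setup as a pair of linprog calls, assuming M = 20 model parameters (matching the count of 20 "things" quoted below) and reusing the illustrative call from the previous sketch:

% Example 1: the sum of M parameters is zero, |m_i| <= 1;
% bound the unweighted average of the first K parameters
M = 20;  K = 19;
G    = ones(1,M);                      % one datum: the sum of all the m_i
dobs = 0;                              % the sum is zero
mlb  = -ones(M,1);                     % lower bounds: m_i >= -1
mub  =  ones(M,1);                     % upper bounds: m_i <= +1
a    = [ones(K,1)/K; zeros(M-K,1)];    % unweighted average of the first K parameters
[~, zmin] = linprog(  a, [], [], G, dobs, mlb, mub );
[~, zmax] = linprog( -a, [], [], G, dobs, mlb, mub );
fprintf('%.4f <= <m> <= %.4f\n', zmin, -zmax);   % about -0.0526 and +0.0526 for K = 19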

[Figure: bounds on the absolute value of the weighted average ⟨m⟩, plotted as a function of K]

if you know that the sum of 20 things is zero, and if you know that each of the things is bounded by ±1, then you know that the average of 19 of the things is bounded by about ±0.05

[Figure: bounds on the absolute value of the weighted average as a function of K; for K > 10, ⟨m⟩ has tighter bounds than the individual m_i]

Example 2: a more complicated data kernel; datum d_k is a weighted average of the first 5k/2 of the m's; bounds: 0 ≤ m_i ≤ 1; average ⟨m⟩: a localized average of 5 neighboring model parameters

[Figure: (A) the data kernel G, with row index j, column index i, and width w; (B) the true model m_i(z_i) plotted against depth z_i; G m_true ≈ d_obs]

[Same figure, annotated: a complicated G, but reminiscent of the Laplace transform kernel]

[Same figure, annotated: the true m_i increases with depth z_i]

[Same figure, annotated: the minimum-length solution]

[Same figure, annotated: the lower and upper bounds on the solution]

[Same figure, annotated: the lower and upper bounds on the average ⟨m⟩]