Team 2 Final Report: Uncertainty Quantification in Geophysical Inverse Problems

Statement of the Tomography Problem
Observe the arrival times, given the first-order physics. Invert for the slowness (or density) field s(x, z).
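In standard travel-time tomography this forward problem can be written as the line integral below (a standard formulation; the symbols t_i, \Gamma_i, and s are our notation, not the slides'):

$$
t_i = \int_{\Gamma_i} s(x, z)\, d\ell, \qquad i = 1, \dots, N,
$$

where t_i is the i-th observed arrival time and \Gamma_i is the corresponding ray path through the slowness field s(x, z).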

Creating Synthetic Data
Our 'unknown' earth model is a layered model. Having both vertical and horizontal observations is important! The observations are calculated as line integrals through the synthetic model.
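A minimal sketch of this step in Python (the grid size, layer values, unit-square domain, and straight-ray discretization are illustrative assumptions, not details from the slides):

```python
import numpy as np

def synthetic_travel_time(slowness, x0, z0, x1, z1, n_steps=200):
    """Approximate the line integral of a gridded slowness field
    along a straight ray from (x0, z0) to (x1, z1)."""
    nz, nx = slowness.shape
    # Sample points along the ray (domain assumed to be the unit square).
    t = np.linspace(0.0, 1.0, n_steps)
    xs = x0 + t * (x1 - x0)
    zs = z0 + t * (z1 - z0)
    # Nearest-neighbor lookup of the slowness at each sample point.
    ix = np.clip((xs * nx).astype(int), 0, nx - 1)
    iz = np.clip((zs * nz).astype(int), 0, nz - 1)
    ray_length = np.hypot(x1 - x0, z1 - z0)
    return slowness[iz, ix].mean() * ray_length

# A layered 'unknown' earth model: slowness constant within each layer.
model = np.repeat([[1.0], [1.5], [2.0], [1.2]], 8, axis=0) @ np.ones((1, 32))
t_horizontal = synthetic_travel_time(model, 0.0, 0.3, 1.0, 0.3)  # horizontal ray
t_vertical = synthetic_travel_time(model, 0.5, 0.0, 0.5, 1.0)    # vertical ray
```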

Choose a Model
There are many choices of model! Each choice leads to a different solution, and each solution can be evaluated for goodness of fit. Haar wavelets provide an easy way to describe a region.
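To make the Haar idea concrete, here is a hand-rolled single-level 2D Haar decomposition (illustrative only; the slides do not specify how the team built their basis):

```python
import numpy as np

def haar2d_level(img):
    """One level of the 2D Haar transform: a coarse average image plus
    horizontal, vertical, and diagonal detail coefficients.
    Assumes even image dimensions."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    approx = (a + b + c + d) / 4.0   # coarse, region-averaged image
    horiz  = (a + b - c - d) / 4.0   # horizontal edge detail
    vert   = (a - b + c - d) / 4.0   # vertical edge detail
    diag   = (a - b - c + d) / 4.0   # diagonal detail
    return approx, (horiz, vert, diag)

# Each Haar coefficient that is kept 'describes a region' of the image;
# truncating detail coefficients yields a coarser candidate model.
```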

Our Model Has Errors!
The travel times do not allow us to reconstruct all the details of the layers. Covariance matrices are used to measure the uncertainty of the model:
– The prior covariance matrix accounts for the model uncertainty without considering the observed travel times.
– The posterior covariance accounts for the observed travel times.
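In the linear Gaussian setting that the rest of the workflow suggests, the two covariances are related by the standard Bayesian update (notation ours; G is the data prediction matrix introduced below, C_d the travel-time noise covariance):

$$
C_{\text{post}} = \left( C_M^{-1} + G^{\mathsf T} C_d^{-1} G \right)^{-1},
$$

so the observed travel times can only shrink the prior uncertainty, never enlarge it.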

Solving the Inverse Problem for a single choice of model

The Prior Distribution “The natural choice for a prior pdf is the distribution that allows for the greatest uncertainty while obeying the constraints imposed by the prior information, and it can be shown that this least informative pdf is the pdf that has maximum entropy” (Jaynes 1968, 1995, Papoulis 1984) From Malinverno, 2000
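When the prior information consists of a mean m_prior and covariance C_M (the quantities used in the slides above), this least informative, maximum-entropy pdf is the Gaussian (a standard result; notation ours):

$$
p(m) \propto \exp\!\left( -\tfrac{1}{2}\,(m - m_{\text{prior}})^{\mathsf T} C_M^{-1} (m - m_{\text{prior}}) \right).
$$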

Prior PDF: Mean Surface

Prior PDF: Uncertainty Window

Prior PDF: Colormap of Uncertainty and Mean

Prior PDF: Composite

The Data Prediction Matrix
For each observation, we calculate the same line integral through our wavelet model; these integrals form the columns of the matrix. The better the model is, the closer these integrals match our observations.
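A sketch of how such a matrix can be assembled, reusing the synthetic_travel_time helper from the earlier sketch as the integrator (the function and argument names here are our own, not the slides'):

```python
import numpy as np

def prediction_matrix(basis_images, rays, integrate):
    """Assemble G with G[i, j] = line integral of the j-th basis image
    along the i-th ray, so that predicted data = G @ coefficients.
    (The slides store these per-observation integrals as columns,
    i.e. the transpose of this layout.)"""
    G = np.empty((len(rays), len(basis_images)))
    for i, ray in enumerate(rays):          # ray = (x0, z0, x1, z1)
        for j, basis in enumerate(basis_images):
            G[i, j] = integrate(basis, *ray)
    return G

# Usage: G = prediction_matrix(wavelet_basis, rays, synthetic_travel_time)
```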

Data Prediction Matrix

Posterior PDF: 3D Histogram Surfaces
The posterior mean is the best estimate for the unknown function. The posterior uncertainty allows us to put error bars on this estimate.
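Under the same linear Gaussian assumptions, the posterior mean has the standard closed form (notation ours):

$$
m_{\text{post}} = m_{\text{prior}} + C_{\text{post}}\, G^{\mathsf T} C_d^{-1} \left( d - G\, m_{\text{prior}} \right),
$$

where d is the vector of observed travel times; the square roots of the diagonal of C_post give the error bars on this estimate.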

Posterior PDF: Mean Surface

Posterior PDF: Uncertainty Window

Posterior PDF: Composite

Other Results

Solving the Inverse Problem for many choices of model

Finding a Best Model
Strategy: check many models and find which ones fit the data best!
– Each model has a neighborhood of hereditary models.
– Optimization algorithm: check the neighborhood for a better model; a single neighbor is selected at random (see the sketch below).
– Run for a fixed amount of time.
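A minimal sketch of this search loop, assuming hypothetical helpers neighbors(model) and fitness(model) (e.g. a log marginal likelihood score); none of these names come from the slides:

```python
import random
import time

def random_neighbor_search(initial_model, neighbors, fitness, time_budget_s=60.0):
    """Hill climbing with one randomly chosen neighbor per step,
    run for a fixed wall-clock budget."""
    best, best_fit = initial_model, fitness(initial_model)
    deadline = time.monotonic() + time_budget_s
    while time.monotonic() < deadline:
        candidate = random.choice(neighbors(best))  # single random neighbor
        cand_fit = fitness(candidate)
        if cand_fit > best_fit:                     # keep only improvements
            best, best_fit = candidate, cand_fit
    return best, best_fit
```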

Synthetic Data

Prior Mean

Prior Uncertainty

Prior Mean/Uncertainty Composite

Prior Uncertainty Surfaces

Log Marginal Likelihood
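For reference, under the linear Gaussian model used above, the log marginal likelihood of a candidate model has a closed form (a standard identity, not taken from the slides):

$$
\log p(d \mid \mathcal{M}) = -\tfrac{1}{2}\log\det(2\pi\Sigma) - \tfrac{1}{2}\,(d - G m_{\text{prior}})^{\mathsf T} \Sigma^{-1} (d - G m_{\text{prior}}), \qquad \Sigma = G C_M G^{\mathsf T} + C_d.
$$

This is the natural goodness-of-fit score for comparing different wavelet models.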

Optimal Decimation

Observed vs. Predicted

Posterior Mean

Posterior Uncertainty

Posterior Mean/Uncertainty