Errors in Error Variance Prediction and Ensemble Post-Processing
Elizabeth Satterfield (1), Craig Bishop (2)
(1) National Research Council, Monterey, CA, USA; (2) Naval Research Laboratory, Monterey, CA, USA

Adjust ensemble variances to be consistent with the mean or the mode of the posterior distribution?

 This result shows that the optimal error variance is a combination of a static climatological prediction and a flow-dependent prediction: a theoretical justification for hybrid data assimilation.
 Bishop and Satterfield (2012) show how the parameters defining the prior and posterior distributions can be deduced from a time series of innovations paired with ensemble variances.

Ensemble Post-Processing
When all distributions are Gaussian, Bayes' Theorem gives the posterior distribution in closed form. What is the value of knowledge of the variance of the error variance?
 Method 1: Invariant - The variance is fixed to the climatological average of the forecast error variance; the ensemble prediction of the variance is ignored.
 Method 2: Naïve - The naïve ensemble prediction of the variance is used; all climatological information is ignored.
 Method 3: Informed Gaussian - The mean of the posterior distribution is used; the variable nature of the forecast error variance is ignored.
 Method 4: Fully Processed - Each ensemble member (j) draws a random variance from the posterior distribution.

Ensemble Forecasts
 Accurate predictions of forecast error variance are vital to ensemble weather forecasting.
 The variance of the ensemble provides a prediction of the error variance of the ensemble mean or, possibly, of a high-resolution forecast.

How Should Ensemble Forecasts Be Post-Processed?
 Should one adjust a raw ensemble variance prediction to a new value that is more consistent with historical data?
 Must one attempt to describe the distribution of forecast error variance given an imperfect ensemble prediction?

A Simple Model of Innovation Variance Prediction
 The "true state" is defined by a random draw from a climatological Gaussian distribution.
 The error of the deterministic forecast is defined by a random draw from a Gaussian distribution whose variance is itself a random draw from a climatological inverse-gamma distribution of error variances.

An "Imperfect" Ensemble Prediction
 We assume an ensemble variance s² is drawn from a Gamma distribution, s² ~ Γ(k, θ), with mean a(ω² − R).

Define a Posterior Distribution of Error Variances Given an Ensemble Variance
(Figure: stochastic sensitivity; prior climatological distribution; distribution of s² given ω²; climatological mean; naïve ensemble prediction.)
The error variance is not precisely known, but given a forecast and an inaccurate ensemble variance, there is an inverse-gamma distribution of possible true innovation variances.

Weather Roulette (Hagedorn and Smith, 2008)
 Two players, A and B, each open a weather roulette casino.
 Each starts with a $1 bet and reinvests their winnings every round (fully proper variant).
 A and B use their forecasts to:
 - set the odds of their own casino, ~1/p_A(v) and ~1/p_B(v);
 - distribute their money in the other casino, ~p_A(v) and ~p_B(v).
 They define 100 equally likely climatological bins.
 The value of the ensemble is interpreted using an effective daily interest rate.

Results from the Lorenz '96 Model
 We use the 10-variable Lorenz '96 model and an Ensemble Transform Kalman Filter (ETKF).
 The ETKF ensemble is re-sampled to create a suboptimal M-member ensemble, which forms the error covariance matrix and produces a suboptimal analysis.
 Only the full ETKF analysis and analysis ensemble are cycled.

Conclusions
 Knowledge of varying error variances is beneficial in ensemble post-processing.
 Rank frequency histograms and the weather roulette effective daily interest rate show improvement when varying error variances are considered.
 Our ensemble post-processing scheme has proved effective with both synthetic data and data from a long Lorenz '96 model run.
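The simple model and the four post-processing methods above can be sketched numerically. This is a minimal illustration, not the poster's code: the prior parameters (alpha, beta) and the scaling a are hypothetical choices, observation error R is taken as zero, and the posterior follows from the conjugacy of an inverse-gamma prior with a gamma likelihood for the ensemble variance.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative (hypothetical) parameters: climatological error variances
# follow InvGamma(alpha, beta) with mean beta/(alpha - 1) = 10, and the
# ensemble variance is a noisy Gamma draw with small shape k.
N = 200_000
alpha, beta = 6.0, 50.0
k = 0.5          # gamma shape of the ensemble variance (small = noisy)
a = 1.0          # E[s^2 | omega^2] = a * omega^2  (R set to 0 here)

# "True" error variances: omega2 ~ InvGamma(alpha, beta).
omega2 = beta / rng.gamma(alpha, size=N)

# Imperfect ensemble variances: s2 | omega2 ~ Gamma(k, scale=a*omega2/k).
s2 = rng.gamma(k, a * omega2 / k)

# Method 1 (invariant): climatological mean variance, ensemble ignored.
v_inv = np.full(N, beta / (alpha - 1.0))

# Method 2 (naive): the raw (bias-corrected) ensemble variance.
v_naive = s2 / a

# Method 3 (informed Gaussian): the posterior mean.  With this prior and
# likelihood the posterior is InvGamma(alpha + k, beta + k*s2/a), so its
# mean is a linear blend of a static climatological term and the
# flow-dependent ensemble prediction -- the hybrid-DA combination.
v_post = (beta + k * s2 / a) / (alpha + k - 1.0)

# Method 4 (fully processed): draw a random variance from the posterior
# (one draw per member in the poster's scheme; one per case here).
v_fp = (beta + k * s2 / a) / rng.gamma(alpha + k, size=N)

def rmse(v):
    """RMS error of a variance prediction against the true variance."""
    return float(np.sqrt(np.mean((v - omega2) ** 2)))

print("invariant:", rmse(v_inv))
print("naive:    ", rmse(v_naive))
print("informed: ", rmse(v_post))
```

With a noisy ensemble (k = 0.5), the posterior-mean blend out-predicts both the static climatological variance and the raw ensemble variance, which is the sense in which the result above justifies hybrid weighting.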
The effective daily interest rate obtained by a gambler using the Fully Processed (FP) ensemble in casinos whose odds were set by (a) invariant, (b) naïve, and (c) informed Gaussian ensembles. Columns show the effective number of ensemble members used to generate the variance prediction. Smaller values of M correspond to less accurate predictions of error variance. Values shown represent the mean taken over seven independent trials.

(Figure panels: M=10 member random sample ensemble; M=10 member post-processed ensemble.)

Acknowledgments: CHB gratefully acknowledges support from the U.S. Office of Naval Research Grant #. This research was performed while ES held a National Research Council Research Associateship Award.

Rank frequency histogram obtained from an M=1000 member ensemble by defining 10 bins based on percentile values. The experiments used the parameter values ω′² = 10, k = 0.5, and a = 0.5.
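The weather roulette diagnostic described above can be sketched as follows. This is a minimal illustration of the "fully proper" variant of Hagedorn and Smith (2008) under assumptions of our own: a hypothetical gambler who knows the true per-day bin probabilities bets against a house quoting flat climatological odds, with 10 bins rather than the poster's 100.

```python
import numpy as np

def effective_daily_interest(p_gambler, p_house, outcomes):
    """Fully proper weather roulette (after Hagedorn and Smith, 2008).

    Each day the gambler reinvests all wealth, spreading it over the bins
    in proportion to p_gambler; the house pays odds ~1/p_house on the bin
    that verifies.  Returns the rate r such that final wealth equals
    (1 + r)**n_days times the initial stake.
    """
    idx = np.arange(len(outcomes))
    # Daily wealth multiplier: fraction staked on the verifying bin,
    # times the house odds for that bin.
    growth = p_gambler[idx, outcomes] / p_house[idx, outcomes]
    return float(np.exp(np.log(growth).mean()) - 1.0)

rng = np.random.default_rng(1)
n_days, n_bins = 5000, 10   # illustrative; the poster uses 100 bins

# Hypothetical flow-dependent truth: sharp per-day bin probabilities.
p_true = rng.dirichlet(np.full(n_bins, 0.5), size=n_days)
outcomes = np.array([rng.choice(n_bins, p=p) for p in p_true])

# House odds from climatology: equally likely bins.
p_clim = np.full((n_days, n_bins), 1.0 / n_bins)

# An informed gambler earns a positive effective daily interest rate
# against the climatological casino; betting climatology against
# climatology breaks exactly even.
r_informed = effective_daily_interest(p_true, p_clim, outcomes)
r_even = effective_daily_interest(p_clim, p_clim, outcomes)
print(r_informed, r_even)
```

The expected daily log-growth of the informed gambler equals the mean Kullback-Leibler divergence between the true and climatological bin probabilities, which is why the interest rate is a proper measure of ensemble value.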