
Local Bayesian Model Averaging for the UW ProbCast

Patrick Tewson, University of Washington, Applied Physics Laboratory
Eric P. Grimit, Jeffrey Baars, Clifford F. Mass, University of Washington, Atmospheric Sciences

Research supported by: Office of Naval Research Multidisciplinary University Research Initiative (MURI)

Motivation

“As high as 81! Hey, Eric! Those intervals are too wide! And 15% chance of precip? Hmm…”

Summary from Last Fall

Mean error climatology (MEC):
- The ensemble mean plus the variance of its errors over some training history (a minimal code sketch follows below).
- A good benchmark for evaluating competing calibration methods.
- Generally beats the raw ensemble, even though it is not a state-dependent forecast of uncertainty.

Local Bayesian model averaging (Local-BMA):
- Model forecast performance varies locally, so the BMA parameters should depend on grid-point location.
- Train BMA using elevation, land-use, and proximity constraints.
- Can consistently beat MEC in tests with grid-based verification.
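As a concrete reading of the MEC definition above, here is a minimal sketch of assembling such a forecast; the function and argument names are illustrative, not from the UW system:

```python
import numpy as np

def mec_forecast(ens_today, ens_mean_hist, obs_hist):
    """Gaussian MEC forecast: centered on today's ensemble mean
    (plus the historical mean error), with spread taken from the
    history of ensemble-mean errors rather than from today's
    ensemble spread. Argument names are illustrative."""
    errors = np.asarray(obs_hist) - np.asarray(ens_mean_hist)
    mu = float(np.mean(ens_today)) + errors.mean()  # debias the center
    sigma = errors.std(ddof=1)                      # error-climatology spread
    return mu, sigma  # parameters of the N(mu, sigma^2) predictive pdf
```

Because sigma comes from the error history alone, the spread is the same on quiet and active days, which is exactly the sense in which MEC is not a state-dependent forecast of uncertainty.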

Global-BMA Calibration and Sharpness

[Figure: probability integral transform (PIT) histograms, an analog of verification rank histograms for continuous forecasts, comparing the calibration and sharpness of Global-BMA, MEC, and FIT. 00 UTC cycle; October 2002 – March 2004; 361 cases.]
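A PIT value is the predictive cumulative distribution function evaluated at the verifying observation; for a calibrated forecast the PIT values are uniform on [0, 1] and the histogram is flat. A self-contained sketch with synthetic Gaussian forecasts (all numbers illustrative):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu = rng.normal(size=1000)                    # predictive means
obs = mu + rng.normal(scale=1.0, size=1000)   # synthetic verifying obs

# PIT value = predictive CDF at the observation. A flat histogram
# indicates calibration; a U shape indicates an underdispersive
# forecast, a central hump an overdispersive one.
pit = norm.cdf(obs, loc=mu, scale=1.0)
counts, _ = np.histogram(pit, bins=10, range=(0.0, 1.0))
```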

Local-BMA Calibration and Sharpness

[Figure: PIT histograms comparing the calibration and sharpness of Local-BMA, MEC, and FIT. 00 UTC cycle; October 2002 – March 2004; 361 cases.]

BMA Forecast Skill Comparison

[Figures: percent improvement in CRPS over MEC, for Local-BMA and for Global-BMA.]

An Observation-Based Approach to Local-BMA

- Development and testing: Winter–Spring 2006.
- Expected to drive the MURI “killer application”: the UW ProbCast.
- Several “tuning” parameters are available, which can hopefully be optimized.
- Deploy it initially for MAXT2 and MINT2 forecasts.
- Application to mixed discrete-continuous quantities (e.g., QPF) and 2-D quantities (wind) will require further exploration.

An Observation-Based Approach to Local-BMA

- Allow the BMA parameters to vary by grid point.
- Use observations, remote if necessary, as training data.
- Follow the Baars et al. bias-correction procedure (optimized from the Mass–Wedam–Steed method) to also select the training data for Local-BMA.
- For each grid point, search for n nearby stations (e.g., 8) within a search radius (e.g., 864 km), at similar elevation (e.g., within 250 m), and with similar land use; a code sketch of this search follows below.
- The land-use categories were consolidated into 9 classes (down from 24 in MM5).
- The figure shows the methodology for the Mass–Wedam–Steed settings.
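A minimal sketch of that station search, assuming each station and grid point carries a latitude, longitude, elevation, and one of the 9 consolidated land-use classes; the data layout and function names are hypothetical:

```python
import numpy as np

def select_training_stations(grid_pt, stations, n=8,
                             max_dist_km=864.0, max_elev_diff_m=250.0):
    """Pick up to n stations near a grid point for Local-BMA training.

    grid_pt and each station are dicts with 'lat', 'lon', 'elev_m',
    and 'landuse' (one of the 9 consolidated classes).
    """
    def haversine_km(a, b):
        # Great-circle distance between two lat/lon points.
        lat1, lon1, lat2, lon2 = map(
            np.radians, (a['lat'], a['lon'], b['lat'], b['lon']))
        h = (np.sin((lat2 - lat1) / 2) ** 2
             + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
        return 2.0 * 6371.0 * np.arcsin(np.sqrt(h))

    # Keep stations with the same land-use class and similar elevation.
    candidates = [
        (haversine_km(grid_pt, s), s) for s in stations
        if s['landuse'] == grid_pt['landuse']
        and abs(s['elev_m'] - grid_pt['elev_m']) <= max_elev_diff_m
    ]
    # Of those inside the search radius, take the n closest.
    candidates = [c for c in candidates if c[0] <= max_dist_km]
    candidates.sort(key=lambda c: c[0])
    return [s for _, s in candidates[:n]]
```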

Maximum 2-m Temperature – Case Study

[Table: station KHMS (Hanford, WA), latitude 46.56 (longitude and grid indices not preserved); the observed value and the forecasts of ensemble members 1–8, plus the ensemble mean (ENS-MEAN), in °F. Numeric values not preserved in the transcript.]

Global-BMA Mean

[Table: as in the KHMS case study, with the Global-BMA mean (Global-BMA-MEAN) appended; numeric values not preserved.]

Local-BMA Mean

[Table: as in the KHMS case study, with the Local-BMA mean (Local-BMA-MEAN) appended; numeric values not preserved.]

Bias-Corrected Ensemble Mean

[Table: as in the KHMS case study, with the bias-corrected ensemble mean (BC-ENS-MEAN) appended; numeric values not preserved.]

Global-BMA Sharpness

[Table: observed value and the Global-BMA 5%, mean, and 95% quantiles (°F) for KHMS; numeric values not preserved.]

Local-BMA Sharpness

[Table: observed value and the Local-BMA 5%, mean, and 95% quantiles (°F) for KHMS; numeric values not preserved.]

Local-MEC Sharpness

[Table: observed value and the Local-MEC 5%, mean, and 95% quantiles (°F) for KHMS; numeric values not preserved.]

Calibration (all stations)

Calibration (water only)

Sharpness (all stations)

Sharpness (water only)

Minimum 2-m Temperature – Same Story

[Figure panels: calibration and sharpness, for all stations and for water only.]

Continuous Ranked Probability Scores

[Figure panels: CRPS for MAXT2 and MINT2, for all stations and for water only.]

Next Steps

- Go operational with Local-BMA for MAXT2 and MINT2. The code is almost ready; some issues remain with “blank” grid points. Parameter optimization?
- Work on precipitation next (PoP & PQPF). There are issues with small training samples and precipitation: what if all the training values are zero? The search parameters will probably need modification: distance to crest? up-slope / down-slope? That depends on the terrain gradient and the wind!
- Wind (2-D vector). Established methods exist for wind speed and wind direction separately, using gamma and von Mises mixture distributions, respectively. For 2-D wind forecasts we need to build an EM-like algorithm or employ CRPS (energy score) minimization; work is being done on the CRPS (energy score) for 2-D variables [statistics].

QUESTIONS and DISCUSSION


MEC Performance with Grid-Based Verification

CRPS = continuous ranked probability score [the probabilistic analog of the mean absolute error (MAE) used for scoring deterministic forecasts].

Comparison of UWME 48-h 2-m temperature forecasts, with a member-specific mean bias correction [14-day running mean] applied to both:
- FIT = Gaussian fit to the raw forecast ensemble
- MEC = Gaussian fit to the ensemble mean + the mean error climatology

[Figure: CRPS for FIT and MEC. 00 UTC cycle; October 2002 – March 2004; 361 cases.]
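For a Gaussian predictive distribution, such as FIT or MEC, the CRPS has a closed form (Gneiting et al. 2005), so the comparison above is cheap to compute; a sketch:

```python
import numpy as np
from scipy.stats import norm

def crps_gaussian(mu, sigma, y):
    """Closed-form CRPS of a N(mu, sigma^2) forecast at observation y.
    Lower is better; for a point forecast the CRPS reduces to the
    absolute error, which is why it is the probabilistic analog of
    the MAE."""
    z = (y - mu) / sigma
    return sigma * (z * (2.0 * norm.cdf(z) - 1.0)
                    + 2.0 * norm.pdf(z) - 1.0 / np.sqrt(np.pi))
```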

Local-BMA Forecast Performance

[Figure: CRPS for MEC and Local-BMA.]

After several attempts to implement BMA with local or regional training data, excellent results were achieved when the training data are selected from a neighborhood* of grid points with similar land-use type and elevation.
- An example application to 48-h 2-m temperature forecasts uses only 14 training days.
- Dramatic improvements in CRPS nearly everywhere.

*Neighbors have the same land-use type and an elevation difference < 200 m, within a search radius of 3 grid points (60 km).

An Advanced Calibration Method: Bayesian Model Averaging (BMA) Summary

BMA has several advantages over MEC:
- A time-varying uncertainty forecast.
- A way to keep multi-modality, if it is warranted.
- Maximizes the information from short (2–4 week) training periods.
- Allows for different relative skill among members, through the BMA weights (multi-model, multi-scheme physics).

The BMA predictive density is a weighted mixture over the K members [c.f. Raftery et al. 2005, Mon. Wea. Rev.]:

p(y | f_1, …, f_K) = Σ_{k=1..K} w_k · N(y | a_k + b_k f_k, σ²)

- a_k, b_k: member-specific mean-bias correction parameters
- w_k: member-specific BMA weights
- σ²: the BMA variance (not member-specific here, but it can be)
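The weights and variance are estimated on a sliding training window by maximum likelihood via the EM algorithm (Raftery et al. 2005). A minimal sketch for the Gaussian case with a common variance, assuming the member forecasts in F have already been bias-corrected (all names illustrative):

```python
import numpy as np
from scipy.stats import norm

def fit_bma_gaussian(F, y, n_iter=200, tol=1e-6):
    """EM fit of Gaussian BMA on a training set.

    F : (T, K) bias-corrected member forecasts for T training cases
    y : (T,)   verifying observations
    Returns the member weights w (K,) and the common std dev sigma.
    """
    F = np.asarray(F, dtype=float)
    y = np.asarray(y, dtype=float)
    T, K = F.shape
    w = np.full(K, 1.0 / K)
    sigma = float(np.std(y))
    for _ in range(n_iter):
        # E-step: z[t, k] = posterior probability that member k was
        # the "best" forecast for training case t.
        dens = w * norm.pdf(y[:, None], loc=F, scale=sigma)  # (T, K)
        z = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weights are the average responsibilities; the common
        # variance is the responsibility-weighted mean squared error.
        w_new = z.mean(axis=0)
        sigma_new = np.sqrt((z * (y[:, None] - F) ** 2).sum() / T)
        converged = (np.abs(w_new - w).max() < tol
                     and abs(sigma_new - sigma) < tol)
        w, sigma = w_new, sigma_new
        if converged:
            break
    return w, sigma
```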

Extending BMA to Non-Gaussian Variables

For quantities such as wind speed and precipitation, the distributions are not only non-Gaussian but also not purely continuous: there are point masses at zero. For probabilistic quantitative precipitation forecasts (PQPF):
- Model P(Y = 0) with a logistic regression.
- Model P(Y > 0) with a finite gamma mixture distribution.
- Fit the gamma means as a linear regression of the cube root of the observation on the forecast and an indicator function for no precipitation.
- Fit the gamma variance parameters and the BMA weights by the EM algorithm, with some modifications.

[c.f. Sloughter et al. 200x, manuscript in preparation]
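To make the mixed discrete-continuous structure concrete, here is a hedged sketch of evaluating an exceedance probability under such a model. It simplifies the approach outlined above: a common gamma shape parameter is assumed and the no-precipitation indicator term of the mean regression is omitted; every function name and coefficient is illustrative:

```python
import numpy as np
from scipy.stats import gamma
from scipy.special import expit  # logistic function

def pqpf_exceedance(q, fcsts, w, logit_coefs, mean_coefs, shape=2.0):
    """P(Y > q) for precipitation amount q under a BMA-style mixed
    discrete-continuous mixture. With q = 0 this returns the PoP.

    fcsts       : member precipitation forecasts
    w           : BMA member weights (sum to 1)
    logit_coefs : per-member (a0, a1) for the P(Y = 0) logistic model
    mean_coefs  : per-member (b0, b1) for the cube-root-scale mean
    shape       : common gamma shape (simplifying assumption)
    """
    p_exceed = 0.0
    for k, f in enumerate(fcsts):
        a0, a1 = logit_coefs[k]
        p_zero = expit(a0 + a1 * f)               # discrete mass at zero
        b0, b1 = mean_coefs[k]
        mean_k = max(b0 + b1 * np.cbrt(f), 1e-6)  # mean on cube-root scale
        # Gamma fitted on the cube-root scale, so
        # P(Y > q) = P(Y^(1/3) > q^(1/3)) for the continuous part.
        tail = gamma.sf(np.cbrt(q), a=shape, scale=mean_k / shape)
        p_exceed += w[k] * (1.0 - p_zero) * tail
    return p_exceed
```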