
On the value of reforecasts for the TIGGE database

Renate Hagedorn, European Centre for Medium-Range Weather Forecasts
Tom Hamill, NOAA/ESRL/PSD

18 September 2009

Motivation

One goal of TIGGE is to investigate whether multi-model predictions are an improvement over single-model forecasts. The goal of using reforecasts to calibrate single-model forecasts is likewise to provide improved predictions.

Questions:
 What are the relative benefits (and costs) of both approaches?
 What is the mechanism behind the improvements?
 Which is the "better" approach?

Possible verification datasets

If we do not verify against model-independent observations, we need to agree on a 'fair' but also 'most useful' verification dataset.

Use each model's own analysis as verification:
 The multi-model has no "own analysis"
 Intercomparison of skill scores is difficult, because the reference forecast scores differently against different analyses

Use a multi-model analysis as verification:
 Incorporating less accurate analyses does not necessarily lead to an analysis that is closest to reality
 Calibration needs a consistent verification dataset in both the training and the application phase; a multi-model analysis is not available for the reforecast training period

Use a "semi-independent" analysis: ERA-Interim:
 Assumed to be as close as possible to reality
 Available for a long period in the past and in near real-time
 For upper-air fields in the extra-tropics, close to the analyses of the best models and to the multi-model analysis
 For the tropics and near-surface fields, use bias-corrected forecasts for a 'fair' assessment

Choice of analysis: upper air, extra-tropics

T-850hPa, DJF 2008/09, Northern Hemisphere (20°N - 90°N); NCEP, Met Office, ECMWF and TIGGE. Solid: multi-model analysis as verification; dashed: ERA-Interim as verification.

Using ERA-Interim leads to only minor differences, except at short lead times, where scores get worse (this applies to all models).

Choice of analysis: upper air, tropics

T-850hPa, DJF 2008/09, Tropics (20°S - 20°N); NCEP, Met Office, ECMWF and TIGGE. Solid: multi-model analysis as verification; dashed: ERA-Interim as verification.

Using ERA-Interim worsens scores considerably for the Met Office, less for ECMWF, and least for NCEP.

Choice of analysis: surface

T2m, DJF 2008/09, Northern Hemisphere (20°N - 90°N); NCEP, Met Office, ECMWF and TIGGE. Solid: multi-model analysis as verification; dashed: ERA-Interim as verification.

Using ERA-Interim worsens scores, in particular at early lead times: more for the Met Office and NCEP, less for ECMWF.

Choice of analysis: surface, bias-corrected

T2m, DJF 2008/09, Northern Hemisphere (20°N - 90°N); NCEP, Met Office, ECMWF and TIGGE. Solid: bias-corrected forecasts with ERA-Interim as verification; dashed: direct model output (DMO) with ERA-Interim as verification.

Bias correction improves scores, in particular at early lead times: more for the Met Office and NCEP, less for ECMWF.
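The running bias correction used for these comparisons amounts to subtracting, per lead time and location, the mean forecast-minus-analysis error of a recent window. This sketch is not part of the original slides; it is a minimal illustration for a single time series, with hypothetical function and variable names:

```python
import numpy as np

def bias_correct(fcst, anal, window=30):
    """Subtract from each forecast the mean error (forecast minus analysis)
    of up to `window` preceding days. One time series = one lead time and
    one location; the first day has no history and is left unchanged."""
    fcst = np.asarray(fcst, dtype=float)
    anal = np.asarray(anal, dtype=float)
    out = fcst.copy()
    for t in range(len(fcst)):
        lo = max(0, t - window)
        if t > lo:  # at least one past forecast-analysis pair available
            out[t] = fcst[t] - (fcst[lo:t] - anal[lo:t]).mean()
    return out
```

With a constant offset the correction removes the bias exactly after the first day; in practice the window trades sampling noise against adaptivity to seasonally varying bias.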

Comparing 9 TIGGE models and the multi-model (MM)

T-850hPa, DJF 2008/09, NH (20°N - 90°N); DMO vs. ERA-Interim. Symbols indicate the significance level vs. the MM (1%).

Comparing 9 TIGGE models and the MM

T2m, DJF 2008/09, NH (20°N - 90°N); bias-corrected (BC) vs. ERA-Interim.

Comparing 4 TIGGE models and the MM

T-850hPa, DJF 2008/09, NH (20°N - 90°N); DMO vs. ERA-Interim.

Comparing 4 TIGGE models and the MM

T2m, DJF 2008/09, NH (20°N - 90°N); BC vs. ERA-Interim.

Calibration using reforecasts

All calibration methods need a training dataset containing a number of forecast-observation pairs from the past. Non-homogeneous Gaussian Regression (NGR) provides a Gaussian PDF, N(a + b·x̄, c + d·σ²), based on the ensemble mean x̄ and variance σ² of the raw forecast distribution, with Φ = CDF of the standard Gaussian distribution (which appears in the closed-form CRPS used for fitting).

Calibration process:
 Determine the optimal calibration coefficients by minimizing the CRPS over the training dataset
 Apply the calibration coefficients to the ensemble mean and variance of the actual forecast to obtain the calibrated PDF
 Create a calibrated NGR ensemble with 51 synthetic members
 Combine the NGR ensemble with the '30-day bias-corrected' forecast ensemble
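The calibration process above can be sketched in code. This is an illustrative implementation, not the operational ECMWF system: it assumes the standard NGR form N(a + b·x̄, c + d·σ²), fits the coefficients by minimizing the closed-form Gaussian CRPS over the training pairs, and draws 51 synthetic members as equidistant quantiles of the calibrated PDF.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def gaussian_crps(mu, sigma, obs):
    """Closed-form CRPS of N(mu, sigma^2) against observations obs."""
    z = (obs - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z)
                    - 1.0 / np.sqrt(np.pi))

def fit_ngr(ens_mean, ens_var, obs):
    """Fit NGR coefficients (a, b, c, d) by minimizing the mean CRPS
    over the training forecast-observation pairs."""
    def cost(p):
        a, b, c, d = p
        var = c + d * ens_var
        if np.any(var <= 0):          # keep the variance model valid
            return 1e10
        return gaussian_crps(a + b * ens_mean, np.sqrt(var), obs).mean()
    res = minimize(cost, x0=[0.0, 1.0, 0.1, 1.0], method="Nelder-Mead")
    return res.x

def ngr_ensemble(a, b, c, d, ens_mean, ens_var, n_members=51):
    """Calibrated synthetic ensemble: equidistant quantiles of the NGR PDF."""
    mu = a + b * ens_mean
    sigma = np.sqrt(c + d * ens_var)
    q = np.arange(1, n_members + 1) / (n_members + 1)
    return norm.ppf(q, loc=mu, scale=sigma)
```

On synthetic training data with a biased, under-dispersive raw ensemble, the fitted coefficients shift the mean and inflate the spread, lowering the CRPS relative to the raw forecast.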

The reforecast dataset

(Figure: schematic of the reforecast training dataset, spanning Nov-Dec-Jan.)


Comparing 4 TIGGE models, the MM and EC-CAL

2m temperature, DJF 2008/09, NH (20°N - 90°N); BC and reforecast-calibrated vs. ERA-Interim.

Comparing 4 TIGGE models, the MM and EC-CAL

2m temperature, DJF 2008/09, EU (35°N - 75°N, 12.5°E - 42.5°W); BC and reforecast-calibrated vs. ERA-Interim.

Comparing 4 TIGGE models, the MM and EC-CAL

MSLP, DJF 2008/09, NH (20°N - 90°N); BC and reforecast-calibrated vs. ERA-Interim.

Comparing 4 TIGGE models, the MM and EC-CAL

T-850hPa, DJF 2008/09, NH (20°N - 90°N); DMO and reforecast-calibrated vs. ERA-Interim.

Mechanism behind the improvements

2m temperature, DJF 2008/09, Northern Hemisphere (20°N - 90°N); verification: ERA-Interim. (Figure: spread (dashed) and RMSE (solid) as a function of lead time.)
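The spread-RMSE comparison shown here is the standard reliability diagnostic: for a well-calibrated ensemble, the mean spread should approximately match the RMSE of the ensemble mean. A minimal sketch, not part of the original slides, with a hypothetical function name:

```python
import numpy as np

def spread_and_rmse(ens, obs):
    """Spread-skill diagnostic for an ensemble forecast.
    ens: array (n_cases, n_members); obs: array (n_cases,).
    Returns (mean ensemble spread, RMSE of the ensemble mean)."""
    ens_mean = ens.mean(axis=1)
    # spread: square root of the mean per-case ensemble variance
    spread = np.sqrt(ens.var(axis=1, ddof=1).mean())
    rmse = np.sqrt(((ens_mean - obs) ** 2).mean())
    return spread, rmse
```

For a statistically consistent ensemble the two curves lie close together (up to a small finite-ensemble-size factor); spread well below RMSE indicates under-dispersion, the deficiency that both the multi-model and the calibration approach address.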

Reduced TIGGE multi-model

2m temperature, DJF 2008/09, Northern Hemisphere (20°N - 90°N); verification: ERA-Interim. Skill is measured against CRPS_ref = CRPS(full TIGGE).
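The skill measure used here can be sketched as follows. The ensemble CRPS below is the standard sample estimator (mean absolute error against the observation minus half the mean absolute difference between members), and the skill score compares it to the reference CRPS, here CRPS(full TIGGE); function names are illustrative:

```python
import numpy as np

def ensemble_crps(ens, obs):
    """Sample CRPS of one ensemble forecast.
    ens: array (n_members,); obs: scalar verifying value."""
    ens = np.asarray(ens, dtype=float)
    term_obs = np.abs(ens - obs).mean()                        # E|X - y|
    term_pairs = 0.5 * np.abs(ens[:, None] - ens[None, :]).mean()  # 0.5 E|X - X'|
    return term_obs - term_pairs

def crpss(crps_fcst, crps_ref):
    """CRPS skill score: 1 = perfect, 0 = no improvement over the reference,
    negative = worse than the reference."""
    return 1.0 - crps_fcst / crps_ref
```

For a one-member "ensemble" the CRPS reduces to the absolute error, and a perfect deterministic forecast scores zero.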

TIGGE vs. ECMWF vs. EC-CAL

2m temperature, DJF 2008/09, Northern Hemisphere (20°N - 90°N); verification: ERA-Interim.

Impact of calibration and MM in EPSgrams

2m temperature, forecast from 30/12/2008. (Figure: EPSgrams for Monterey and London, showing ECMWF, ECMWF-NGR, TIGGE and the analysis.)

What about station data?

(No significance test applied.)

Relative benefits and costs

Benefits, upper-air fields:
 TIGGE multi-model: limited
 NGR calibration using reforecasts: limited

Benefits, surface fields:
 TIGGE multi-model: improved scores through reduced systematic error and increased spread
 NGR calibration using reforecasts: improved scores through reduced systematic error and more appropriate spread

Costs, computational aspects:
 TIGGE multi-model: no extra computer time, but data transfer costs
 NGR calibration using reforecasts: moderate increase in computing time (~10%); "for free" if reforecasts are produced for other purposes anyway

Costs, logistic aspects:
 TIGGE multi-model: significantly increased complexity could make the system more prone to failures, and timing issues could arise
 NGR calibration using reforecasts: slight increase in complexity, e.g. when changing model cycles

Summary

What are the relative benefits (and costs) of both approaches?
 Both the multi-model and the reforecast calibration approach can improve predictions, in particular for (biased and under-dispersive) near-surface parameters

What is the mechanism behind the improvements?
 Both approaches correct similar deficiencies to a similar extent

Which is the "better" approach?
 On balance, reforecast calibration seems to be the easier option for reliably providing forecasts in an operational environment
 Both approaches can be useful in achieving the ultimate goal of an optimized, well-tuned forecast system