ECMWF WWRP/WMO Workshop on QPF Verification - Prague, 14-16 May 2001
NWP precipitation forecasts: Validation and Value

Presentation transcript:

NWP precipitation forecasts: Validation and Value
Deterministic Forecasts - Probabilities of Precipitation - Value - Extreme Events
François Lalaurette, ECMWF

Deterministic Verification
Deterministic: one cause (the weather today - the analysis), one effect (the weather in n days - the forecast).
Verification of the forecast against observations can be:
- categorical (e.g. verify events when daily rainfall > 50 mm)
- continuous (needs the definition of a norm for errors), e.g. the root mean square error, RMSE = sqrt(mean((RR_fc - RR_obs)^2))
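A minimal sketch of the two approaches in Python (hypothetical rainfall values; NumPy assumed):

```python
import numpy as np

# Hypothetical 24 h rainfall totals (mm) at matched station/grid points
rr_fc = np.array([0.0, 3.2, 12.5, 60.0, 0.4])   # forecast
rr_ob = np.array([0.0, 1.0, 20.0, 55.0, 0.0])   # observed

# Categorical verification: did daily rainfall exceed 50 mm?
hits = np.sum((rr_fc > 50.0) & (rr_ob > 50.0))

# Continuous verification: root mean square error
rmse = np.sqrt(np.mean((rr_fc - rr_ob) ** 2))
print(hits, rmse)
```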

Deterministic Verification: Biases
Bias = mean(observation - forecast)
Diurnal cycle: too much convective rain by 12 h, too little by 00 h (local time).
(Figure: bias time series annotated with model changes - T319, 4D-Var, 60 levels + new precipitation scheme, New Physics, New microphysics, 3D-Var.)

Deterministic Verification: Bias maps (DJF 2001)
Overestimation of orographic precipitation.

Deterministic Verification: scatter plots
Error distribution.

Deterministic Verification: Frequency Distribution
Small amounts of precipitation are much more frequent in the forecast than in SYNOP observations.
(Figure: percentage of days with < 0.1 mm - 39% vs. 58%.)

Deterministic Verification: Heavy rainfall
Higher resolution has brought more realistic distributions of heavy rainfall.

Deterministic Verification: Does all this make sense?
SYNOP observation catchment area (rain gauge) = O(10^-1 m^2); model grid catchment area = O(1000 km^2).
- A large number of independent SYNOP observations per model grid box is therefore required to assess the precipitation fluxes in a grid box.
- High-resolution climatological data - O(10 per model grid box) - are not exchanged in real time, but can be used for a posteriori verification.
- Two studies recently explored the sensitivity of ECMWF verification to the upscaling of observations (Ghelli and Lalaurette, 2000, used data from Météo-France, while Cherubini et al. used data from MAP).

Deterministic verification: Super-observations
(Figure: SYNOP data collected from the GTS vs. the climatological network of Météo-France.)

Deterministic verification: Super-observations (2)
The bias towards too many light-rain events is to a large extent a representativeness artefact.

Probabilities of Precipitation
PoP can be derived following two strategies:
- derive the PDF from past (conditional) error statistics (MOS, Kalman filter), e.g. using scatter diagrams;
- transport a prescribed PDF of initial errors into the future (the dynamical or "ensemble" approach): ECMWF runs 50 perturbed forecasts at T255L40 (+ 1 control).

Probabilities of Precipitation (EPSgram)
Forecast for Prague, base time 10/5/ UTC.

Probabilistic Verification
What do we want to verify?
- Whether probabilities are unbiased: when an event is forecast with probability 60%, it should verify 6 times out of 10 (no more, no less!). Note, however, that by this measure alone, always forecasting the climatological frequency would be a "perfect" forecast.
- ...or whether the probabilistic product is useful compared, for example, with a single deterministic forecast.

Probabilistic Verification: 1) Reliability Diagrams

Probabilistic Verification: 4) Brier Scores
BS = (1/N) Σ (p - o)²
- p is the probability forecast (the relative number of EPS members forecasting the event);
- o is the verification (= 1 if the event occurred, = 0 otherwise).
The Brier score varies from 0 (perfect deterministic forecast) to 1 (perfectly wrong deterministic forecast). The Brier skill score measures the relative performance with respect to the climate (for which p = p_c, the relative frequency of occurrence in the long-term climate):
BSS = 1 - BS/BS_c
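A short sketch of both scores, with hypothetical probabilities and outcomes:

```python
import numpy as np

def brier_score(p, o):
    """Mean squared difference between forecast probability p
    and binary outcome o (1 = event occurred, 0 = it did not)."""
    p, o = np.asarray(p, float), np.asarray(o, float)
    return np.mean((p - o) ** 2)

# Hypothetical sample of EPS probabilities and verifying observations
p = np.array([0.1, 0.6, 0.9, 0.3, 0.0])
o = np.array([0, 1, 1, 0, 0])

bs = brier_score(p, o)
p_clim = o.mean()                                  # sample climatological frequency
bs_clim = brier_score(np.full_like(p, p_clim), o)  # climate reference forecast
bss = 1.0 - bs / bs_clim                           # Brier skill score
print(bs, bss)
```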

Probabilistic Verification: Brier Skill Scores Time Series
(Figure: BSS time series annotated with model changes - Rnorm + bugfix, T255, Rnorm - stochastic physics, 60 levels + new precipitation.)

Forecast Value: Brier Scores partition
The BS can be split into the sample climate uncertainty (UNC), the forecast reliability (BS_REL) and the forecast resolution (BS_RSL): BS = BS_REL - BS_RSL + UNC.
- Resolution tells how informative the probabilistic forecast is. It varies from zero, for a system in which all forecast probabilities verify with the same frequency of occurrence, up to the sample uncertainty, for a system in which the frequency of verifying occurrences takes only the values 0 or 100% (such a system resolves perfectly between occurring and non-occurring events).
- Reliability tells how close the frequencies of observed occurrences are to the forecast probabilities (on average, when an event is forecast with probability p, it should occur with frequency p).
- Uncertainty varies from 0 to 0.25 and indicates how close to 50% the occurrence frequency of the event was during the sample period (uncertainty is 0.25 when the sample is split equally between occurrence and non-occurrence).
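A sketch of this partition (the standard Murphy decomposition), assuming forecast probabilities are pooled into equal-width bins:

```python
import numpy as np

def brier_decomposition(p, o, n_bins=10):
    """Murphy decomposition of the Brier score, BS = REL - RES + UNC,
    with forecasts pooled into n_bins equal-width probability bins."""
    p, o = np.asarray(p, float), np.asarray(o, float)
    obar = o.mean()                        # sample climatological frequency
    unc = obar * (1.0 - obar)              # uncertainty term
    idx = np.minimum((p * n_bins).astype(int), n_bins - 1)  # bin index per forecast
    rel = res = 0.0
    for k in range(n_bins):
        in_bin = idx == k
        n_k = in_bin.sum()
        if n_k == 0:
            continue
        p_k = p[in_bin].mean()             # mean forecast probability in bin
        o_k = o[in_bin].mean()             # observed frequency in bin
        rel += n_k * (p_k - o_k) ** 2      # reliability (smaller is better)
        res += n_k * (o_k - obar) ** 2     # resolution (larger is better)
    n = len(p)
    return rel / n, res / n, unc
```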

Forecast Value: Categorical Forecasts
Step 1: event definition, e.g. will rain exceed 10 mm over the 24 h period H+72/H+96?
Step 2: gather verification data:
- H = number of good forecasts of the event occurring (hits)
- M = number of misses (no-forecast, but the event occurred)
- F = number of false alarms (yes-forecast of a no-event)
- Z = number of good forecasts of a no-event
False Alarm Rate = F/(F+Z); Hit Rate = H/(H+M)
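A minimal contingency-table sketch (hypothetical yes/no series):

```python
import numpy as np

def hit_and_false_alarm_rates(fc_yes, ob_yes):
    """Contingency-table scores for a binary (yes/no) forecast."""
    fc_yes, ob_yes = np.asarray(fc_yes, bool), np.asarray(ob_yes, bool)
    H = np.sum(fc_yes & ob_yes)       # hits
    M = np.sum(~fc_yes & ob_yes)      # misses
    F = np.sum(fc_yes & ~ob_yes)      # false alarms
    Z = np.sum(~fc_yes & ~ob_yes)     # correct rejections
    return H / (H + M), F / (F + Z)   # hit rate, false alarm rate

# Hypothetical example: did 24 h rain exceed 10 mm?
fc = [True, False, True, True, False]
ob = [True, False, False, True, True]
print(hit_and_false_alarm_rates(fc, ob))
```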

Value of Probabilistic Categorical Forecasts: Relative Operating Characteristics
Categorical forecasts of the event can be made at different probability levels (p > 0, p > 10%, p > 20%, etc.), each yielding one (false alarm rate, hit rate) point on the ROC curve.
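A sketch tracing ROC points by sweeping the probability threshold (it assumes the sample contains both events and non-events, so neither rate divides by zero):

```python
import numpy as np

def roc_points(p, o, thresholds=np.arange(0.0, 1.0, 0.1)):
    """(False alarm rate, hit rate) pairs obtained by issuing a
    categorical forecast whenever the EPS probability exceeds
    each threshold in turn."""
    p, o = np.asarray(p, float), np.asarray(o, bool)
    points = []
    for t in thresholds:
        fc = p > t
        H = np.sum(fc & o); M = np.sum(~fc & o)
        F = np.sum(fc & ~o); Z = np.sum(~fc & ~o)
        points.append((F / (F + Z), H / (H + M)))
    return points   # pairs tracing the ROC curve
```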

Categorical Forecast Economic Value (Richardson, 2000)
A cost/loss ratio (C/L) decision model can be built on several decision-making strategies:
1. take preventive action (with cost C) on a systematic basis;
2. never take action (and therefore face the loss L whenever the event occurs);
3. take action when the event is forecast by the meteorological model;
4. take action when the event occurs (a strategy that assumes a perfect forecast is available).

Categorical Forecast Economic Value (Richardson, 2000)
Strategies 1 and 2 can be combined: always take action if the cost/loss ratio is smaller than the climatological frequency of occurrence of the event, and never take action otherwise. The economic value of the meteorological forecast is then computed as the reduction of expense it makes possible:
V = (E_climate - E_forecast) / (E_climate - E_perfect)
where E_climate = min(C, p_c L) is the expense of the best climatological strategy, E_forecast is the mean expense when acting on the forecast, and E_perfect = p_c C is the expense with a perfect forecast.
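A sketch of this value computation, with expenses normalised by the loss L (the hit/false-alarm rates and C/L below are hypothetical):

```python
def economic_value(hit_rate, false_alarm_rate, p_clim, cost_loss):
    """Relative economic value V of a categorical forecast
    (Richardson, 2000): reduction of mean expense relative to the
    best climatological strategy, normalised by a perfect forecast.
    All expenses are expressed in units of the loss L."""
    a = cost_loss                                  # C/L ratio
    e_clim = min(a, p_clim)                        # always act vs. never act
    e_perfect = p_clim * a                         # act only when the event occurs
    e_fc = (hit_rate * p_clim * a                  # hits: pay C
            + false_alarm_rate * (1 - p_clim) * a  # false alarms: pay C
            + (1 - hit_rate) * p_clim * 1.0)       # misses: pay L
    return (e_clim - e_fc) / (e_clim - e_perfect)

# Hypothetical user: C/L = 0.2, event climatological frequency 10%
print(economic_value(hit_rate=0.7, false_alarm_rate=0.1,
                     p_clim=0.1, cost_loss=0.2))
```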

Categorical Forecast Economic Value (Richardson, 2000)

Refinements of the EPS verification procedures
- Address the skill over smaller areas (need to gather several event categories - CRPS).
- Specifically target extreme events (need climatological data).
- Refine the references ("poor man's ensembles"):
  - ensemble forecasts of Z500 have been shown to be no more skilful than cheaper alternatives, such as distributions of errors over the previous year and/or multi-model ensembles (Atger, 1999; Ziemann, 2000);
  - the ensemble's maximum skill seems to be achieved in abnormal situations.

Control, Ensemble Mean and Continuous Ranked Probability Score (CRPS)
For a deterministic forecast, CRPS = Mean Absolute Error (MAE).
(Figure: control MAE, ensemble-mean MAE and CRPS for T2m, D+6 forecasts, 1/3 to 15/4/1999.)
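A sketch of the CRPS for one ensemble forecast; the slide does not specify an estimator, so the standard empirical (energy-form) one is assumed here:

```python
import numpy as np

def crps_ensemble(members, obs):
    """Empirical CRPS for one forecast: mean |X - y| minus half the
    mean |X - X'| over ensemble member pairs. For a single-member
    ("deterministic") ensemble it reduces to the absolute error."""
    x = np.asarray(members, float)
    term1 = np.mean(np.abs(x - obs))
    term2 = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))
    return term1 - term2

# Single member: CRPS equals the absolute error, as on the slide
print(crps_ensemble([12.0], 10.0))   # -> 2.0
```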

Extreme Events: Recent examples
A) The French floods of 12-13 November 1999.
(Figure: accumulated rainfall map; scales in inches and km.)

Extreme Events: November floods
(Figure: T_L319 precipitation accumulated over 72-96 h, categories 40-80 mm and > 80 mm, ~1100 km domain; T_L159 EPS probability of precipitation > 20 mm (0.8"), categories > 5%, > 35%, > 65%.)

Extreme Events: November floods
Verification against SYNOP data.
(Figure: probability categories 0% < p, 10% < p, 20% < p.)

Extreme Events: An EPS Climate
- 3 years (January 1997 to December 1999) at constant horizontal resolution (T_L159).
- Monthly basis, validity 12 UTC, Europe, lat/lon grid (0.5 x 0.5 - oversampling).
- T2m, precipitation (24, 120 and 240 h accumulations), 10 m wind speed.
- 50 members (D+5 and D+10) + control (D+0, D+5 and D+10): around 10,000 events per month.
- Post-processing is fully non-parametric (the archived values are all 100 percentiles, plus the 1‰ and 999‰ quantiles).

Extreme Events: An EPS Climate (2)

Extreme Events: EPS Climate (November)
(Figure: maps of 24 h rain rates exceeded with frequencies 1% and 1‰.)

Extreme Events: Proposals
- A better definition of events worth plotting, e.g. the number of EPS members forecasting 10 m wind speeds exceeding the 99% threshold of the "EPS Climate".
- A non-parametric "Extreme Forecast Index", based on how far the EPS distribution is from the climate distribution?

Extreme Events: Extreme Forecast Index
A CRPS-like distance between the forecast and climate distributions: by re-scaling it using the climate distribution, we can create a dimensionless, signed measure. The Extreme Forecast Index is:
- 0% when forecasting the climate distribution,
- 25% for a deterministic forecast of the median,
- 100% for a deterministic forecast of an extreme.
(The formulas shown on the original slide are not reproduced in the transcript.)
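As an illustration only, here is a sketch using the form of the EFI later published by Lalaurette (2003), EFI = (2/π) ∫₀¹ (p − F_f(p)) / √(p(1−p)) dp. The exact normalisation on the 2001 slide (which assigns 25% to a median forecast) is not recoverable from the transcript, so this sketch may differ from it:

```python
import numpy as np

def extreme_forecast_index(members, climate_quantiles):
    """EFI as published by Lalaurette (2003): (2/pi) times the
    integral of (p - F_f(p)) / sqrt(p*(1-p)) dp, where F_f(p) is the
    fraction of ensemble members below the climate quantile at
    probability p. 0 = climate distribution itself, +/-1 = extreme."""
    x = np.asarray(members, float)
    q = np.asarray(climate_quantiles, float)
    # Assume the supplied quantiles sit at evenly spaced probabilities
    p = np.linspace(0.01, 0.99, len(q))
    f = np.array([np.mean(x < v) for v in q])   # forecast CDF at climate quantiles
    integrand = (p - f) / np.sqrt(p * (1.0 - p))
    return (2.0 / np.pi) * np.sum(integrand) * (p[1] - p[0])

# Toy check: an ensemble far above the climate distribution -> EFI near 1
clim_q = np.linspace(0.0, 50.0, 99)             # hypothetical climate quantiles (mm)
print(extreme_forecast_index(np.full(51, 80.0), clim_q))
```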

Extreme Events: EFI Maps for the November Floods

Extreme Events: Verification issues
The proposal is to extend the products from physical parameters (e.g. amounts of precipitation) to forecasts of climatological quantiles (e.g. today's forecast is for a precipitation event that did not occur more than once in 100 days in our February climatology).
- Local climatologies are needed to rescale the observed values.
- What to do with major model changes?

Summary
- Data currently exchanged on the GTS (SYNOP) can only support very crude measures of precipitation forecast performance (biases), or measures on scales much broader than those resolved by the model (e.g. hydrological basins).
- High-resolution networks are needed to upscale the data from local to model-grid scales.
- Ensemble forecasts have shown some skill in assessing probabilities of occurrence in the medium range; an optimal combination of dynamical and statistical PoP remains to be achieved.

Summary (2)
- The value of probability forecasts compared with purely deterministic precipitation forecasts is easy to establish.
- Some indication of extreme events can be found in the direct model output... provided it is seen from a model perspective.
- A framework for the verification of these extreme-event forecasts has been established, but it requires gathering long climatological records from a range of stations.