Verification of ensemble systems Chiara Marsigli ARPA-SIMC

Deterministic forecasts
Event E, e.g.: the precipitation accumulated over 24 hours at a given location (raingauge, radar pixel, hydrological basin, area) exceeds 20 mm.
The event is either observed, o(E) = 1, or not observed, o(E) = 0; over a verification sample, the event is observed with frequency o(E).
A deterministic forecast states that the event occurs, p(E) = 1, or does not occur, p(E) = 0.

Probabilistic forecasts
Event E, e.g.: the precipitation accumulated over 24 hours at a given location (raingauge, radar pixel, hydrological basin, area) exceeds 20 mm.
The event is either observed, o(E) = 1, or not observed, o(E) = 0; over a verification sample, the event is observed with frequency o(E).
The event is forecast with a probability p(E) that can take any value in the interval [0,1]: p(E) ∈ [0,1].

Ensemble forecasts
Event E, e.g.: the precipitation accumulated over 24 hours at a given location (raingauge, radar pixel, hydrological basin, area) exceeds 20 mm.
The event is either observed, o(E) = 1, or not observed, o(E) = 0; over a verification sample, the event is observed with frequency o(E).
With an M-member ensemble, the event is forecast with probability p(E) = k/M, where k is the number of members forecasting the event: if no member forecasts it, p(E) = 0; if all members do, p(E) = 1.
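
As a minimal Python sketch of this step, with made-up data and illustrative names (ens_precip and threshold are not from the slides):

```python
import numpy as np

# Hypothetical 24 h precipitation forecasts from an M-member ensemble
# at N verification points (values in mm); the data are made up.
M, N = 20, 1000
rng = np.random.default_rng(0)
ens_precip = rng.gamma(shape=0.8, scale=8.0, size=(M, N))

threshold = 20.0                          # event E: precipitation > 20 mm
k = (ens_precip > threshold).sum(axis=0)  # members forecasting the event
p_event = k / M                           # forecast probability p(E) = k/M
```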

Probabilistic forecasts
An accurate probability forecast system has:
- reliability: agreement between forecast probability and mean observed frequency
- sharpness: tendency to forecast probabilities near 0 or 1, as opposed to values clustered around the mean
- resolution: ability of the forecast to resolve the set of sample events into subsets with characteristically different outcomes

Brier Score
A scalar summary measure for the assessment of forecast performance: the mean squared error of the probability forecast,

$BS = \frac{1}{N}\sum_{i=1}^{N} (f_i - o_i)^2$

where N is the number of points in the "domain" (spatio-temporal), o_i = 1 if the event occurs at point i and o_i = 0 if it does not, and f_i is the probability of occurrence according to the forecast system (e.g. the fraction of ensemble members forecasting the event). BS can take values in the range [0,1], a perfect forecast having BS = 0.
BS is sensitive to the climatological frequency of the event: the rarer an event, the easier it is to get a good BS without having any real skill.
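
A minimal sketch of the score as defined above (the function name is ours):

```python
import numpy as np

def brier_score(p_forecast, o_observed):
    """Mean squared error of the probability forecast.

    p_forecast: forecast probabilities in [0, 1], shape (N,)
    o_observed: binary outcomes (1 if the event occurred), shape (N,)
    """
    p = np.asarray(p_forecast, dtype=float)
    o = np.asarray(o_observed, dtype=float)
    return np.mean((p - o) ** 2)
```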

Brier Score decomposition (Murphy 1973)
Let M be the ensemble size and k = 0, …, M the number of ensemble members forecasting the event (the probability classes, with f_k = k/M). Let N be the total number of points in the verification domain, N_k the number of points where the event is forecast by exactly k members, ō_k the frequency of the event in the sub-sample N_k, and ō the total frequency of the event (the sample climatology). Then

$BS = \frac{1}{N}\sum_{k=0}^{M} N_k (f_k - \bar{o}_k)^2 \; - \; \frac{1}{N}\sum_{k=0}^{M} N_k (\bar{o}_k - \bar{o})^2 \; + \; \bar{o}(1 - \bar{o})$

i.e. BS = reliability − resolution + uncertainty.
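
A sketch of the decomposition, assuming the forecast probabilities come from an M-member ensemble as above (function name is ours):

```python
import numpy as np

def brier_decomposition(k_members, o_observed, M):
    """Murphy (1973) decomposition; returns (reliability, resolution,
    uncertainty), with BS = reliability - resolution + uncertainty.

    k_members: members forecasting the event at each point, shape (N,)
    o_observed: binary outcomes, shape (N,)
    """
    k_members = np.asarray(k_members)
    o = np.asarray(o_observed, dtype=float)
    N = o.size
    o_bar = o.mean()                     # sample climatology
    rel = res = 0.0
    for k in range(M + 1):               # probability classes f_k = k/M
        in_class = k_members == k
        N_k = in_class.sum()
        if N_k == 0:
            continue
        o_k = o[in_class].mean()         # event frequency in sub-sample N_k
        rel += N_k * (k / M - o_k) ** 2
        res += N_k * (o_k - o_bar) ** 2
    return rel / N, res / N, o_bar * (1.0 - o_bar)
```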

Brier Skill Score
Measures the improvement of the probabilistic forecast relative to a reference forecast (e.g. the sample climatology):

$BSS = 1 - \frac{BS}{BS_{ref}}$

When the reference forecast is the sample climatology, $BS_{ref} = BS_{clim} = \bar{o}(1 - \bar{o})$, where ō is the total frequency of the event (the sample climatology). The forecast system has predictive skill if BSS is positive, a perfect system having BSS = 1.
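
With the sample climatology as reference, BS_clim equals the uncertainty term of the decomposition above, so BSS = (resolution − reliability) / uncertainty. A short sketch (function name is ours):

```python
import numpy as np

def brier_skill_score(p_forecast, o_observed):
    """BSS against the sample climatology as reference forecast."""
    p = np.asarray(p_forecast, dtype=float)
    o = np.asarray(o_observed, dtype=float)
    bs = np.mean((p - o) ** 2)
    o_bar = o.mean()
    bs_clim = o_bar * (1.0 - o_bar)      # Brier score of the climatology
    return 1.0 - bs / bs_clim
```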

Ranked Probability Score
An extension of the Brier Score to the multi-category situation: the squared errors are computed with respect to the cumulative probabilities in the forecast and observation vectors,

$RPS = \frac{1}{M-1}\sum_{m=1}^{M}\left[\sum_{k=1}^{m}(f_k - o_k)\right]^2$

where M is the number of forecast categories, o_k = 1 if the event occurs in category k and 0 otherwise, and f_k is the probability of occurrence in category k according to the forecast system (e.g. the fraction of ensemble members forecasting the event). RPS takes values in the range [0,1], a perfect forecast having RPS = 0.
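
A minimal sketch of the normalized RPS for a single forecast-observation pair (function name is ours):

```python
import numpy as np

def ranked_probability_score(f_cat, o_cat):
    """Normalized RPS over M categories (perfect forecast: 0).

    f_cat: forecast probabilities per category, shape (M,), summing to 1
    o_cat: one-hot observation vector, shape (M,)
    """
    F = np.cumsum(np.asarray(f_cat, dtype=float))  # cumulative forecast
    O = np.cumsum(np.asarray(o_cat, dtype=float))  # cumulative observation
    M = F.size
    return np.sum((F - O) ** 2) / (M - 1)
```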

ROC Curves (Relative Operating Characteristics, Mason and Graham 1999)
A contingency table can be built for each probability class (a probability class can be defined as the percentage of ensemble members which actually forecast a given event):

                  Observed
                  yes    no
Forecast   yes     a      b
           no      c      d

Hit Rate: HR = a / (a + c)
False Alarm Rate: FAR = b / (b + d)

ROC Curve
For the k-th probability class, E is forecast if it is forecast by at least k ensemble members: a warning can be issued when the forecast probability for the predefined event exceeds some threshold. The endpoints of the curve correspond to "at least 0 members" (the event is always forecast) and "at least M+1 members" (the event is never forecast).
Hit rates are plotted against the corresponding false alarm rates to generate the ROC curve. The area under the ROC curve is used as a statistical measure of forecast usefulness: a value of 0.5 indicates that the forecast system has no skill.
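
A sketch that builds the contingency table for each probability class and accumulates the ROC points and area (names are ours; it assumes both outcomes occur in the sample):

```python
import numpy as np

def roc_curve(k_members, o_observed, M):
    """Hit rate and false alarm rate for each probability class
    ("E forecast by at least k members", k = 0 .. M+1), plus the
    trapezoidal area under the resulting ROC curve."""
    k_members = np.asarray(k_members)
    o = np.asarray(o_observed).astype(bool)
    hr, far = [], []
    for k in range(M + 2):                  # k = 0: always; k = M+1: never
        warn = k_members >= k
        a = np.sum(warn & o)                # hits
        b = np.sum(warn & ~o)               # false alarms
        c = np.sum(~warn & o)               # misses
        d = np.sum(~warn & ~o)              # correct rejections
        hr.append(a / (a + c))
        far.append(b / (b + d))
    area = 0.0                              # points run from FAR = 1 to FAR = 0
    for i in range(1, len(hr)):
        area += 0.5 * (hr[i] + hr[i - 1]) * (far[i - 1] - far[i])
    return np.array(far), np.array(hr), area
```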

Cost-loss Analysis
Decision model: a user U can take protective action at cost C against an event E which, if it occurs without protection, causes a loss L.

                      E happens
                      yes    no
take action   yes      C      C
              no       L      0

With a deterministic forecast system, whose contingency table entries a, b, c, d are expressed as relative frequencies, the mean expense for unit loss is

ME = (a + b) C/L + c

The value V is the gain obtained using the system instead of the climatological information, expressed as a percentage of the gain obtained using a perfect system:

V = (ME_clim − ME) / (ME_clim − ME_perfect), with ME_clim = min(C/L, ō) and ME_perfect = ō C/L

where ō is the sample climatology (the observed frequency). If the forecast system is probabilistic, the user has to fix a probability threshold k: when this threshold is exceeded, protective action is taken, which yields a value V_k for each threshold.

Cost-loss Analysis
Curves of V_k are plotted as a function of C/L, one curve for each probability threshold. The area under the envelope of the curves is the cost-loss area.
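
A sketch of V_k for one probability threshold across a range of cost/loss ratios, following the formulas above (cf. Richardson 2000); the contingency-table numbers are made up:

```python
import numpy as np

def value(a, b, c, d, cost_loss):
    """Economic value V for one contingency table (relative frequencies)
    over an array of cost/loss ratios."""
    o_bar = a + c                            # sample climatology
    me_forecast = (a + b) * cost_loss + c    # mean expense per unit loss
    me_clim = np.minimum(cost_loss, o_bar)   # best of always/never protecting
    me_perfect = o_bar * cost_loss           # protect only when E occurs
    return (me_clim - me_forecast) / (me_clim - me_perfect)

cl = np.linspace(0.01, 0.99, 99)             # cost/loss ratios
v_k = value(0.08, 0.05, 0.02, 0.85, cl)      # made-up table, one threshold
```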

Reliability Diagram
o(p) is plotted against p for some finite binning of width dp. In a perfectly reliable system o(p) = p and the graph is a straight line oriented at 45° to the axes. If the curve lies below the 45° line, the probabilities are overestimated; if the curve lies above the 45° line, the probabilities are underestimated.

Sharpness
Refers to the spread of the probability distributions: the capability of the system to forecast extreme probabilities, i.e. values close to 0 or 1. The frequency of forecasts in each probability bin (shown in a histogram alongside the reliability diagram) shows the sharpness of the forecast.
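
A minimal sketch computing the reliability curve o(p) together with the bin counts that show the sharpness (the binning and the names are ours):

```python
import numpy as np

def reliability_curve(p_forecast, o_observed, n_bins=10):
    """Observed frequency o(p) per forecast-probability bin, plus the
    bin counts whose histogram shows the sharpness of the forecast."""
    p = np.asarray(p_forecast, dtype=float)
    o = np.asarray(o_observed, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(p, edges) - 1, 0, n_bins - 1)
    obs_freq = np.full(n_bins, np.nan)
    counts = np.zeros(n_bins, dtype=int)
    for i in range(n_bins):
        in_bin = idx == i
        counts[i] = in_bin.sum()
        if counts[i] > 0:
            obs_freq[i] = o[in_bin].mean()   # o(p) for this bin
    bin_centers = 0.5 * (edges[:-1] + edges[1:])
    return bin_centers, obs_freq, counts
```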

Rank Histogram (Talagrand Diagram)
Histogram of the rank of the observed value within the distribution of values forecast by the ensemble: the M sorted member values (V_1, …, V_M) define M+1 intervals, including the outlier intervals below the ensemble minimum and above the ensemble maximum, and the histogram counts how often the observation falls in each interval. For a reliable ensemble the histogram is approximately flat.
Percentage of Outliers: percentage of points where the observed value lies out of the range of forecast values.
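
A minimal sketch of the rank histogram and the percentage of outliers (function name is ours):

```python
import numpy as np

def rank_histogram(ens_values, obs_values):
    """Talagrand diagram: rank of each observation within the M
    ensemble values, giving M+1 bins (bin 0 = below the ensemble
    minimum, bin M = above the ensemble maximum)."""
    ens = np.asarray(ens_values)        # shape (M, N)
    obs = np.asarray(obs_values)        # shape (N,)
    M, N = ens.shape
    ranks = np.sum(ens < obs, axis=0)   # rank of obs, 0 .. M
    hist = np.bincount(ranks, minlength=M + 1)
    outliers = (hist[0] + hist[M]) / N * 100.0   # percentage of outliers
    return hist, outliers
```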

Bibliography
- Bougeault, P.: WGNE recommendations on verification methods for numerical prediction of weather elements and severe weather events. CAS/JSC WGNE Report No. 18.
- Jolliffe, I.T. and D.B. Stephenson, 2003: Forecast Verification: A Practitioner's Guide in Atmospheric Science. Wiley.
- Nurmi, P., 2003: Recommendations on the verification of local weather forecasts. ECMWF Technical Memorandum No. 430.
- Stanski, H.R., L.J. Wilson and W.R. Burrows, 1989: Survey of Common Verification Methods in Meteorology. WMO Research Report No. 89-5.
- Wilks, D.S., 1995: Statistical Methods in the Atmospheric Sciences. Academic Press, New York, 467 pp.

Bibliography
- Hamill, T.M., 1999: Hypothesis tests for evaluating numerical precipitation forecasts. Wea. Forecasting, 14, 155–167.
- Mason, S.J. and N.E. Graham, 1999: Conditional probabilities, relative operating characteristics and relative operating levels. Wea. Forecasting, 14, 713–725.
- Murphy, A.H., 1973: A new vector partition of the probability score. J. Appl. Meteor., 12, 595–600.
- Richardson, D.S., 2000: Skill and relative economic value of the ECMWF ensemble prediction system. Quart. J. Roy. Meteor. Soc., 126, 649–667.
- Talagrand, O., R. Vautard and B. Strauss, 1997: Evaluation of probabilistic prediction systems. Proceedings, ECMWF Workshop on Predictability, Reading, UK.