COSMO-LEPS Verification


COSMO-LEPS Verification Chiara Marsigli ARPA-SMR

Available Italian stations

Verification on station points: the forecast is interpolated to each station location by bilinear interpolation from the 4 nearest grid points.
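As a sketch of this interpolation step (assuming a regular, ascending lat/lon grid; the function and variable names are illustrative, not taken from the COSMO-LEPS code):

```python
import numpy as np

def bilinear_interp(grid, lats, lons, stn_lat, stn_lon):
    """Interpolate a 2-D gridded field to a station location using
    bilinear weights from the four surrounding grid points.

    grid : field values, shape (nlat, nlon)
    lats, lons : 1-D ascending arrays of grid coordinates
    """
    # index of the lower-left corner of the cell containing the station
    j = np.clip(np.searchsorted(lats, stn_lat, side="right") - 1, 0, len(lats) - 2)
    i = np.clip(np.searchsorted(lons, stn_lon, side="right") - 1, 0, len(lons) - 2)
    # fractional position of the station inside the cell
    ty = (stn_lat - lats[j]) / (lats[j + 1] - lats[j])
    tx = (stn_lon - lons[i]) / (lons[i + 1] - lons[i])
    # weighted average of the 4 surrounding grid-point values
    return ((1 - ty) * (1 - tx) * grid[j, i]
            + (1 - ty) * tx * grid[j, i + 1]
            + ty * (1 - tx) * grid[j + 1, i]
            + ty * tx * grid[j + 1, i + 1])
```

A station in the middle of a cell simply gets the mean of the four corner values.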

Verification on super-boxes: within each box, both the observed (OBS.) and the predicted (PRED.) fields are summarized by the average value, the maximum value and the frequency of the event.
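The super-box statistics above can be sketched as follows (a minimal illustration, assuming non-overlapping square boxes and "frequency" meaning the fraction of points exceeding a threshold; names and the box size are illustrative):

```python
import numpy as np

def superbox_stats(field, box=4, threshold=10.0):
    """Aggregate a 2-D field over non-overlapping box x box super-boxes.

    Returns, per box: the average value, the maximum value, and the
    frequency of exceedance of `threshold` (fraction of points above it).
    """
    nlat, nlon = field.shape
    # trim the field so it tiles exactly into whole boxes
    f = field[:nlat - nlat % box, :nlon - nlon % box]
    blocks = f.reshape(f.shape[0] // box, box, f.shape[1] // box, box)
    avg = blocks.mean(axis=(1, 3))            # average value per box
    mx = blocks.max(axis=(1, 3))              # maximum value per box
    freq = (blocks > threshold).mean(axis=(1, 3))  # exceedance frequency
    return avg, mx, freq
```

Applying the same aggregation to forecasts and observations lets them be compared box by box rather than point by point.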

Overlapping boxes [figure: schematic of overlapping super-boxes]

ROC area: COSMO-LEPS vs observations, station points, weighted. Nov 02 – Dec 02 – Jan 03. Need for LM verification at these forecast ranges.

ROC area: COSMO-LEPS vs observations, station points, not weighted.

Weighted (w) vs not weighted (nw), +48 h.

Weighted (w) vs not weighted (nw), +120 h.

Brier Skill Score: COSMO-LEPS vs observations, station points, weighted.

Brier Skill Score: COSMO-LEPS vs observations, station points, not weighted.

Brier Skill Score

Weighting procedure: is it possible to decide in real time whether it is better to weight or not? Does this depend on the ensemble spread? On the flow?

Brier Skill Score: station points vs average values on super-boxes.

Brier Skill Score: station points vs average values on super-boxes.

Brier Skill Score, Nov 02 only.

Brier Skill Score

ROC area

A contingency table can be built for each probability class (a probability class is defined as the percentage of ensemble members which actually forecast a given event):

                  Observed
                  Yes    No
  Forecast  Yes    a      b
            No     c      d

For the k-th probability class, the hit rate and false alarm rate are

  HR_k  = a / (a + c)
  FAR_k = b / (b + d)

The area under the ROC curve traced by the (FAR_k, HR_k) points is used as a statistical measure of forecast usefulness.
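The construction above can be sketched directly: build one contingency table per probability class, collect the (FAR, HR) points, and integrate. This is a minimal illustration (the function name and the default ensemble size are assumptions, not from the presentation):

```python
import numpy as np

def roc_area(probs, obs, n_members=16):
    """ROC area for probabilistic forecasts of a binary event.

    probs : forecast probability per case (fraction of ensemble members
            forecasting the event); obs : 1 if the event occurred, else 0.
    """
    probs, obs = np.asarray(probs), np.asarray(obs)
    hr, far = [1.0], [1.0]  # zero-probability threshold: always forecast "yes"
    for k in range(1, n_members + 1):
        fc_yes = probs >= k / n_members      # forecast "yes" in class k
        a = np.sum(fc_yes & (obs == 1))      # hits
        b = np.sum(fc_yes & (obs == 0))      # false alarms
        c = np.sum(~fc_yes & (obs == 1))     # misses
        d = np.sum(~fc_yes & (obs == 0))     # correct rejections
        hr.append(a / (a + c))               # HR_k  = a / (a + c)
        far.append(b / (b + d))              # FAR_k = b / (b + d)
    hr.append(0.0)
    far.append(0.0)
    hr, far = np.array(hr), np.array(far)
    # trapezoidal area under HR(FAR); points run from (1,1) down to (0,0)
    return float(np.sum((far[:-1] - far[1:]) * (hr[:-1] + hr[1:]) / 2))
```

A perfect ensemble gives an area of 1; an uninformative one gives about 0.5.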

Brier Score

  BS = (1/N) * sum_i (f_i - o_i)^2

where o_i = 1 if the event occurs and o_i = 0 if the event does not occur, and f_i is the probability of occurrence according to the forecast system (e.g. the fraction of ensemble members forecasting the event). BS can take on values in the range [0, 1], a perfect forecast having BS = 0.

Brier Skill Score

  BSS = 1 - BS / BS_cl

where BS_cl is the Brier Score of the reference forecast given by the total frequency of the event (the sample climatology). The forecast system has predictive skill if BSS is positive, a perfect system having BSS = 1.
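The two formulas translate directly into code; a minimal sketch using the sample climatology as the reference forecast (the function name is illustrative):

```python
import numpy as np

def brier_skill_score(probs, obs):
    """Brier Score and Brier Skill Score against sample climatology.

    probs : forecast probabilities f_i; obs : o_i = 1 if the event
            occurred, 0 otherwise.
    """
    f = np.asarray(probs, dtype=float)
    o = np.asarray(obs, dtype=float)
    bs = np.mean((f - o) ** 2)            # BS = (1/N) sum (f_i - o_i)^2
    clim = o.mean()                       # sample climatology of the event
    bs_cl = np.mean((clim - o) ** 2)      # BS of the climatological forecast
    bss = 1.0 - bs / bs_cl                # BSS = 1 - BS / BS_cl
    return bs, bss
```

Forecasting the climatological frequency every time yields BSS = 0, so any positive BSS indicates skill over climatology.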