SEASONAL COMMON PLOT SCORES 2012-2013. Adriano Raspanti. Performance diagram by M.S. Tesini. Sibiu, COSMO General Meeting, 2-5 September 2013.


SEASONAL COMMON PLOT SCORES. Adriano Raspanti. Performance diagram by M.S. Tesini. Sibiu, COSMO General Meeting, 2-5 September 2013

THE MODELS: COSMO-EU (DWD), COSMO-RO (NMA), COSMO-GR (HNMS), COSMO-ME (IT), COSMO-7 (MCH), COSMO-I7 (IT), COSMO-PL (IMGW)

Standard Verification
Period: JJA 2012, SON 2012, DJF 2012/2013, MAM 2013
Run: 00 UTC run
Forecast steps: every 3 hours
Continuous parameters (T2m, Td2m, Mslp, Wspeed, TCC) - Scores: ME, RMSE
Dichotomous parameters (precipitation) - Scores: FBI, POD, FAR, TS with performance diagram
Cumulation: 6h and 24h
Thresholds: 0.2, 0.4, 0.6, 0.8, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 14, 16, 18, 20 mm/6h and mm/24h
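As a minimal sketch (not the operational verification package), the two continuous scores used throughout these slides, ME (bias) and RMSE, can be computed from matched forecast-observation pairs:

```python
import math

def mean_error(forecasts, observations):
    """ME (bias): mean of the forecast-minus-observation differences."""
    diffs = [f - o for f, o in zip(forecasts, observations)]
    return sum(diffs) / len(diffs)

def rmse(forecasts, observations):
    """Root mean square error of matched forecast-observation pairs."""
    squares = [(f - o) ** 2 for f, o in zip(forecasts, observations)]
    return math.sqrt(sum(squares) / len(squares))

# Illustrative 2 m temperature values (degC), not real verification data
fc  = [12.1, 14.0, 9.5, 11.2]
obs = [11.0, 14.5, 10.0, 11.2]
print(mean_error(fc, obs))  # positive ME = mean overestimation
print(rmse(fc, obs))
```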

Conditional Verification
2mT verification with the following criteria (1 condition, based on observations):
- Total cloud cover >= 75% (overcast condition)
- Total cloud cover <= 25% (clear sky condition)
2mT verification with the following criteria (2 conditions, based on observations):
- Total cloud cover >= 75% (overcast condition) AND wind speed < 2.5 m/s
- Total cloud cover <= 25% (clear sky condition) AND wind speed < 2.5 m/s
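The conditional stratification can be sketched as below, with hypothetical field names (`t2m_fc`, `t2m_obs`, `tcc_obs`, `wspd_obs`); as on the slide, the conditions are evaluated on the observed values:

```python
def conditional_subset(records, tcc_min=None, tcc_max=None, wspd_max=None):
    """Select (forecast, observation) 2mT pairs whose OBSERVED cloud cover
    and wind speed match the stratification criteria."""
    out = []
    for r in records:
        if tcc_min is not None and r["tcc_obs"] < tcc_min:
            continue
        if tcc_max is not None and r["tcc_obs"] > tcc_max:
            continue
        if wspd_max is not None and r["wspd_obs"] >= wspd_max:
            continue
        out.append((r["t2m_fc"], r["t2m_obs"]))
    return out

# Illustrative records, not real station data
records = [
    {"t2m_fc": 10.0, "t2m_obs": 9.0,  "tcc_obs": 90, "wspd_obs": 1.0},  # overcast, calm
    {"t2m_fc": 15.0, "t2m_obs": 16.0, "tcc_obs": 10, "wspd_obs": 3.0},  # clear, windy
    {"t2m_fc": 12.0, "t2m_obs": 12.5, "tcc_obs": 80, "wspd_obs": 4.0},  # overcast, windy
]
# Overcast AND no-advection stratum: TCC >= 75% and wind speed < 2.5 m/s
print(conditional_subset(records, tcc_min=75, wspd_max=2.5))  # [(10.0, 9.0)]
```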

Standard Verification on Common Area (NEW!)
Period: DJF 2012/2013, MAM 2013
Run: 00 UTC run
Forecast steps: every 3 hours
Continuous parameters (T2m, Td2m, Mslp, Wspeed, TCC) - Scores: ME, RMSE
Dichotomous parameters (precipitation) - Scores: FBI, POD, FAR, TS with performance diagram
Cumulation: 6h and 24h
Thresholds: 0.2, 0.4, 0.6, 0.8, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 14, 16, 18, 20 mm/6h and mm/24h

Standard Verification on Common Area

TEMPERATURE AT 2 M - JJA 2012 – MAM 2013 Significant diurnal cycle with almost the same phase for all models: the negative bias is more pronounced during daytime, especially in DJF and MAM. Tendency to underestimate in DJF and MAM for almost all models (except C-RU).

DEW POINT TEMPERATURE - JJA 2012 – MAM 2013 The behaviour of the models differs considerably with the season. ME is peculiar for C-RU in DJF, which overestimates (with higher RMSE) while the others underestimate; C-I7 (DJF and MAM) shows an almost shifted diurnal variation. RMSE is quite variable. Many models underestimate (C-GR always).

MEAN SEA LEVEL PRESSURE - JJA 2012 – MAM 2013 The behaviour of the models differs with the season. Worth noting: strong underestimation in MAM for some models, especially C-EU, and strong overestimation by C-GR, C-ME and C-I7 in DJF.

WIND SPEED AT 10 M - JJA 2012 – MAM 2013 C-GR always shows a bias around 0 (less so in DJF), while the others split into two groups, IFS-driven (underestimating) and GME-driven (overestimating), though possibly also related to less complex orography. In RMSE, C-EU behaves peculiarly, with lower values.

TOTAL CLOUD COVER - JJA 2012 – MAM 2013 Clear diurnal cycle for all models, with a general tendency to overestimate except for C-EU, which shows the opposite behaviour, especially in SON and JJA. C-GR shows very low RMSE in JJA.

2MT IN CLEAR SKY CONDITIONS - JJA 2012 – MAM 2013 Clear diurnal cycle for all models, with a general tendency to underestimate in DJF and MAM (maybe a poor sample) and a pronounced error amplitude. RMSE between 2° and 4-5°.

2MT IN OVERCAST CONDITIONS - JJA 2012 – MAM 2013 The diurnal cycle almost disappears for all models. ME is around 0 in SON (except C-GR), while in DJF and MAM there is a tendency to underestimate, except C-ME and C-EU in MAM. RMSE is generally lower than in the previous condition.

2MT IN OVERCAST CONDITIONS AND NO ADVECTION - JJA 2012 – MAM 2013 Less evident diurnal cycle for all models. ME is in general around 0, with a tendency to underestimate in DJF and MAM. RMSE is generally lower in DJF and MAM than in the other seasons.

2MT IN CLEAR SKY CONDITIONS AND NO ADVECTION - JJA 2012 – MAM 2013 Clear diurnal cycle for all models, with a general tendency to underestimate during daytime in DJF and MAM (maybe a poor sample) and a pronounced error amplitude. RMSE between 2° and 5°, with a diurnal cycle as well.

Standard Verification on Common Area

TEMPERATURE AT 2 M - DJF 2013 – MAM 2013 DJF and MAM: in terms of ME, C-PL and C-GR increase their underestimation in the CA, while C-EU, C-ME and C-I7 reduce this tendency. RMSE in the CA is worse for C-PL and C-GR, while C-I7, C-EU and C-ME improve slightly.

MEAN SEA LEVEL PRESSURE - DJF 2013 – MAM 2013 DJF: C-EU shows no bias in either domain. C-I7, C-ME and C-GR overestimate in the VD, an overestimation that disappears in the CA (maybe due to the extreme variability of weather across the countries). Worth noting the evident peculiar underestimation of C-RO in the VD. RMSE improves for C-I7 and C-EU, is steady for C-ME and C-GR, and clearly worsens for C-PL in the CA. MAM: in terms of ME, C-EU underestimates in both domains. C-PL shows almost no bias over its own domain but overestimates in the CA. The other models can be considered steady. RMSE improves for C-RU and C-EU; C-GR shows higher values. C-PL RMSE in the CA is off the scale.

WIND SPEED AT 10 M - DJF 2013 – MAM 2013 DJF and MAM: C-ME and C-I7 underestimate in the VD but overestimate in the CA. C-PL and C-GR show similar ME, while C-EU is slightly worse. C-ME, C-I7 and C-GR improve their RMSE dramatically.

TOTAL CLOUD COVER - DJF 2013 – MAM 2013 DJF: the models look similar, but with some peculiarities. The spread among the models in ME and RMSE is smaller in the CA, but C-EU and C-PL worsen their performance, while C-ME and C-I7 improve in both ME and RMSE. MAM: the models look similar, but with some peculiarities. The spread in ME and RMSE is smaller in the CA; C-PL performs worse in the CA. All models generally overestimate and behave better in the CA in terms of RMSE.

SOME POINTS TO REMEMBER ABOUT PRECIPITATION VERIFICATION: The purpose of these plots is to see the overall performance of the COSMO model. A relative comparison is not fair, because the models are different (IC/BC, assimilation cycle, model version, region characteristics, number of stations used). Only some thresholds and cumulation times have been considered; they identify different rainfall regimes depending on season and geographical characteristics.

PERFORMANCE DIAGRAM The graph exploits the geometric relationship between four measures of dichotomous forecast performance: probability of detection (POD), success ratio (SR, defined as 1-FAR), bias score (BS) and threat score (TS, also known as the Critical Success Index). For good forecasts POD, SR, bias and TS all approach unity, so a perfect forecast lies in the upper right of the diagram. The cross-hairs about each verification point represent the influence of sampling variability: they are estimated by bootstrapping (resampling with replacement) from the verification data (the contingency table). The bars represent the 95th-percentile range for SR and POD.
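A sketch of the four scores and of the bootstrap cross-hairs, assuming a contingency table of hits, false alarms and misses; the resampling scheme here is a generic percentile bootstrap, not necessarily the exact variant used for these plots:

```python
import random

def scores(hits, false_alarms, misses):
    """The four dichotomous scores of the performance diagram."""
    pod = hits / (hits + misses)                   # probability of detection (y-axis)
    sr  = hits / (hits + false_alarms)             # success ratio = 1 - FAR (x-axis)
    fbi = (hits + false_alarms) / (hits + misses)  # bias score = POD / SR
    ts  = hits / (hits + false_alarms + misses)    # threat score (CSI)
    return pod, sr, fbi, ts

def bootstrap_pod_sr(outcomes, n_boot=1000, seed=0):
    """95% percentile ranges of POD and SR, by resampling the
    (forecast event, observed event) pairs with replacement."""
    rng = random.Random(seed)
    pods, srs = [], []
    for _ in range(n_boot):
        sample = [rng.choice(outcomes) for _ in outcomes]
        a = sum(1 for f, o in sample if f and o)      # hits
        b = sum(1 for f, o in sample if f and not o)  # false alarms
        c = sum(1 for f, o in sample if not f and o)  # misses
        if (a + c) and (a + b):
            pods.append(a / (a + c))
            srs.append(a / (a + b))
    pods.sort(); srs.sort()
    lo, hi = int(0.025 * len(pods)), int(0.975 * len(pods)) - 1
    return (pods[lo], pods[hi]), (srs[lo], srs[hi])

pod, sr, fbi, ts = scores(hits=40, false_alarms=20, misses=10)
print(pod, fbi)  # POD 0.8, bias 1.2: overforecasting, above the FBI=1 diagonal
```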

CUMULATION PERIOD: 24 h All the models start at 00 UTC, so we considered: +0h to +24h (day 1), +24h to +48h (day 2), +48h to +72h (day 3). Reference thresholds: 0.2 mm, 2 mm, 10 mm, 20 mm.
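The accumulation and thresholding behind these contingency tables can be sketched as follows (`accumulate` and `dichotomize` are hypothetical helper names; an event is an accumulation at or above the threshold):

```python
def accumulate(six_hourly, window=4):
    """Sum consecutive 6 h amounts into 24 h totals (day 1, day 2, ...)."""
    return [sum(six_hourly[i:i + window])
            for i in range(0, len(six_hourly) - window + 1, window)]

def dichotomize(accum_fc, accum_obs, threshold):
    """Contingency counts (hits, false alarms, misses, correct negatives)
    for one threshold: event = accumulation >= threshold."""
    a = b = c = d = 0
    for f, o in zip(accum_fc, accum_obs):
        fc_event, obs_event = f >= threshold, o >= threshold
        if fc_event and obs_event:
            a += 1
        elif fc_event and not obs_event:
            b += 1
        elif not fc_event and obs_event:
            c += 1
        else:
            d += 1
    return a, b, c, d

# Illustrative 6 h precipitation amounts (mm) for two forecast days
print(accumulate([1.0, 0.0, 3.5, 0.5, 0.0, 0.0, 2.0, 0.0]))  # [5.0, 2.0]
print(dichotomize([5.0, 2.0], [4.0, 0.1], threshold=2.0))     # (1, 1, 0, 0)
```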

THRESHOLD Overestimation for most of the models, except in JJA.

THRESHOLD Good FBI in SON, except C-PL, which always overestimates. Low performance in JJA. Overestimation in DJF, except C-GR. Note D2 and D3 for C-GR in MAM (all the models generally overestimate).

THRESHOLD Scores generally worsen as the threshold increases. Good FBI in SON. High variability for C-GR (the sample?)

THRESHOLD Scores generally worsen as the threshold increases, with low POD.

INTER-COMPARISON OVER THE SAME DOMAIN In the previous diagrams the scores were evaluated over each model's own country; now they are evaluated on the common domain (DJF and MAM only).

THRESHOLD In the CA the differences between the models are less evident than in their own domains, maybe due to the diversity of the rain regimes. High FBI, except C-PL in MAM.

THRESHOLD Again, fewer differences in the CA. C-EU, C-7 and C-RU group together. High FBI.

THRESHOLD Again, fewer differences in the CA. C-EU, C-7 and C-RU group together.

THRESHOLD Low values as the threshold increases, especially for C-EU and C-7 (also C-RU), in both domains.

CUMULATION PERIOD: 6 h For the 6 h cumulation period we considered only the first day of forecast: +0h to +6h, +6h to +12h, +12h to +18h, +18h to +24h. Reference thresholds: 0.2 mm, 5 mm, 10 mm.

THRESHOLD The models show high FBI. High variability for C-GR.

THRESHOLD Low values in JJA and DJF for C-PL. Better FBI in DJF and MAM.

THRESHOLD C-PL keeps low performance (above all in JJA). Note the high variability of the scores and the differences between forecast steps.

INTER-COMPARISON OVER THE SAME DOMAIN In the previous diagrams the scores were evaluated over each model's own country; now they are evaluated on the common domain (DJF and MAM only).

THRESHOLD Again, the models group together in the CA, except C-PL in MAM, with a tendency to overestimate.

THRESHOLD Again, the models group together in the CA, except C-PL in MAM, tending towards the no-bias position. No dramatic differences between forecast steps.

THRESHOLD Again, the models group together in the CA, with lower performance and a tendency to underestimate, especially in DJF.

CONCLUSION To be discussed together!