Requirements from KENDA on the verification: NetCDF feedback files, produced by the analysis system (LETKF) and the ‘stat’ utility (to be included in the LETKF / 3DVAR package).

Presentation transcript:

Requirements from KENDA on the verification
Data input: NetCDF feedback files
– produced by the analysis system (LETKF) and the ‘stat’ utility ((to be) included in the LETKF / 3DVAR package)
– format is documented in the ‘Feedback File Definition’
Observation Header: incl. status and analysis flags
Observation Body: observation value plus meta information for each single (bias-corrected) obs value, incl.:
– bias correction (corrected minus reported value)
– level significance (e.g. TEMPs: surface, standard, significant, max. wind, tropopause)
– status (accepted, active, merged, passive, rejected, passive and rejected)
– analysis flags (bit flag table with quality-check flags, e.g. passive obs type, blacklisted, dataset quality flag, redundant, thinning, gross error, rejected by first-guess check, etc.)
– quality (e.g. observation confidence)
Verification Data section: model analysis & forecast values projected onto observation space (i.e. after applying the obs operator); for ensembles: analysis and forecasts from each ensemble member, but also ensemble mean, spread and potentially other quantities (e.g. position of the obs value within the ensemble of model values, as input for Talagrand diagrams).
→ NetCDF feedback files contain all information required.
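A minimal reading sketch (Python with netCDF4), pairing each observation with its model equivalents from the Verification Data section. The file name and the variable names (obs, bcor, state, level_sig, veri_data) are assumptions based on this description of the ‘Feedback File Definition’ and must be checked against the actual files.

import numpy as np
from netCDF4 import Dataset

with Dataset("fof_exp1.nc") as ds:                  # hypothetical file name
    obs       = ds.variables["obs"][:]              # (bias-corrected) observation values
    bcor      = ds.variables["bcor"][:]             # bias correction (corrected minus reported)
    state     = ds.variables["state"][:]            # status (accepted, active, passive, ...)
    level_sig = ds.variables["level_sig"][:]        # level significance
    veri      = ds.variables["veri_data"][:]        # model values in obs space, shape (n_runs, n_body)

# observation-minus-model departures for every model run (analysis, forecasts, ensemble members)
departures = obs[np.newaxis, :] - veri              # shape (n_runs, n_body)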

Requirements from KENDA on the verification
Remarks on the use of NetCDF feedback files for verification:
For each observation and model run, there will be one model equivalent of the observation value.
+ In general, the observation operator of the assimilation scheme is used to compute this model equivalent.
+ The observation quality / analysis flags can be taken into account.
– There is no posterior flexibility in how the model equivalent is computed (e.g. use of the nearest grid point, or interpolation, or a whole environment of grid points, e.g. for cloud cover, etc.).
– The feedback file format does not (yet?) contain observation analysis data on the model grid, which are typically used e.g. for fuzzy or object-oriented verification.
– The feedback files may not necessarily include the model equivalent for all observations which are non-local in time.
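To make the "no posterior flexibility" point concrete, here is a toy illustration (not KENDA code) of two possible observation operators for a 2-D field; whichever operator was applied when the feedback file was written determines the only model equivalent available later.

import numpy as np

def h_nearest(field, lat, lon, lats, lons):
    """Nearest-grid-point observation operator (lats/lons are 1-D grid coordinates)."""
    j = np.abs(lats - lat).argmin()
    i = np.abs(lons - lon).argmin()
    return field[j, i]

def h_bilinear(field, lat, lon, lats, lons):
    """Bilinear interpolation; assumes ascending coordinates and an interior obs location."""
    j = np.searchsorted(lats, lat) - 1
    i = np.searchsorted(lons, lon) - 1
    wy = (lat - lats[j]) / (lats[j + 1] - lats[j])
    wx = (lon - lons[i]) / (lons[i + 1] - lons[i])
    return ((1 - wy) * (1 - wx) * field[j, i]
            + (1 - wy) * wx * field[j, i + 1]
            + wy * (1 - wx) * field[j + 1, i]
            + wy * wx * field[j + 1, i + 1])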

Requirements from KENDA on the verification
Requirements on verified quantities and applicable conditions
Features not specific to KENDA, but required e.g. for model system development in general (!):
Simple conditional verification: use meta information as conditions, specifically:
– status (priority 1)
– analysis flags (priority 2 for blacklist and dataset flags, priority 3 for other flags)
– level significance (it should be possible to exclude from TEMP verification the surface level (if this is not the standard anyway) or tropopause or significant levels; this issue is not specific to observations from feedback files) (priority 1 for surface level)
– quality / observation confidence (priority 2)
→ (Rather limited number of conditions, probably no need for a relational database.)
Several feedback files from different experiments must be processable simultaneously for comparison between experiments (or between an experiment and the operational run) (priority 1).
It must be ensured that the same set of observations is used for all experiments in a comparison! (priority 1; see the sketch after this list)
– (For this purpose, and to allow for full flexibility, the status and analysis flags should be assigned internally to each experiment separately (for each observation), so that it is possible to specify that only observations which have a certain flag set, or not set, in all (!) experiments are used in a comparison.)
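A minimal sketch of the "same set of observations" requirement: keep only observations that are active in every experiment, then verify each experiment on that common subset. The status code ST_ACTIVE and the identifier variable obs_id are assumptions for illustration, not the actual feedback file flag table.

import numpy as np
from netCDF4 import Dataset

ST_ACTIVE = 1                                   # assumed numeric code for the 'active' status

def load(path):
    with Dataset(path) as ds:
        return {"id":    ds.variables["obs_id"][:],      # hypothetical unique observation identifier
                "state": ds.variables["state"][:],
                "obs":   ds.variables["obs"][:],
                "veri":  ds.variables["veri_data"][:]}

exps = [load(p) for p in ("fof_exp1.nc", "fof_exp2.nc")]     # hypothetical file names

common = set(exps[0]["id"][exps[0]["state"] == ST_ACTIVE])
for e in exps[1:]:
    common &= set(e["id"][e["state"] == ST_ACTIVE])          # active in all experiments

for e in exps:
    mask = np.isin(e["id"], list(common))
    rmse = np.sqrt(np.mean((e["veri"][0, mask] - e["obs"][mask]) ** 2))   # first model run vs. obs
    print(rmse)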

Requirements from KENDA on the verification
Requirements on verified quantities and applicable conditions
Features not specific to KENDA, but required e.g. for model system development in general (!):
Need a choice between using in verification either the bias-corrected observation values or the original reported observation values (priority 1; see the sketch after this list).
For comparison between experiments with different bias corrections (incl. zero bias correction), a choice between two options is needed when using bias-corrected observation values:
– either each experiment is verified against its own observation values (but using the same set of observations) (→ standard case),
– or the (bias-corrected) observation values from one experiment are used to verify the model values from both experiments; → this requires computing the differences between the bias corrections (priority 2, could become priority 1 if bias correction for conventional observations becomes a big issue).
Need the ability to plot observations from a single report (e.g. a vertical temperature profile) together with (a subset of) an ensemble of model values (a sort of spaghetti plot), or to plot an ensemble of model-minus-observation values (as in Uli Pflüger's TEMP verification tool) (priority 1).
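A minimal sketch of the bias-correction choice, assuming the convention stated earlier (bias correction = corrected minus reported value, so the reported value is corrected minus bcor):

import numpy as np

def obs_for_verification(obs_corrected, bcor, use_bias_corrected=True):
    """Return either the bias-corrected or the original reported observation values."""
    obs_corrected = np.asarray(obs_corrected)
    return obs_corrected if use_bias_corrected else obs_corrected - np.asarray(bcor)

# For the second option on this slide (one experiment's corrected observations used
# for both experiments), the relevant quantity is the difference of the bias corrections:
# delta_bcor = bcor_exp1 - bcor_exp2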

Requirements from KENDA on the verification
Requirements on verified quantities and applicable conditions
Features specific to KENDA:
It must be possible to plot the resulting verification score, e.g. RMSE, in the same plot as the ensemble spread (priority 1).
In KENDA, the ensemble spread will typically be stored in the NetCDF feedback files. However (with lower priority), VERIF should also be able to compute quantities such as the ensemble spread itself from the values of the ensemble members.
Specific ensemble scores include:
– Talagrand diagrams
– ROC curves / area (hit rate versus false alarm rate, computed for many thresholds)
– Reliability (uses bins (thresholds), difference between forecast and true probabilities)
– Brier Skill Score (uses a threshold (or bins), compares to a reference, e.g. climatology or persistence) (BSS = 1 - BS/BS_ref, where BS = reliability - resolution + uncertainty)
– Rank Probability Skill Score (categorised BSS)
(priority 2, except for Talagrand diagrams: priority 1)
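A minimal sketch (NumPy) of two of the listed scores, the Talagrand (rank) histogram and the Brier score for one threshold; the input conventions are assumptions (ens with shape (n_members, n_obs), obs with shape (n_obs,)), and rank ties are ignored.

import numpy as np

def talagrand_histogram(ens, obs):
    """Count, for each observation, its rank within the ensemble of model values."""
    ranks = (ens < obs[np.newaxis, :]).sum(axis=0)         # rank 0 .. n_members
    return np.bincount(ranks, minlength=ens.shape[0] + 1)

def brier_score(ens, obs, threshold):
    """Brier score for the event 'value exceeds threshold'."""
    p_fcst = (ens > threshold).mean(axis=0)                # forecast probability from the members
    o = (obs > threshold).astype(float)                    # observed occurrence (0 or 1)
    return np.mean((p_fcst - o) ** 2)

# BSS = 1 - BS / BS_ref, with BS_ref e.g. from climatology or persistence.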

Requirements from KENDA on the verification
Requirements on verified quantities and applicable conditions
Features that may be partly specific to KENDA:
Observation types include:
– upper-air obs (radiosonde, aircraft, wind profiler / RASS, …)
– surface (synoptic, ship, buoy, (scatterometer), …): not only surface (station) pressure, 2-m temperature and humidity, 10-m wind, but also (total, low, middle, high) cloud cover, cloud base height, visibility (?), etc.
– radar: radial wind, 3-dim. reflectivity
– later on: GPS zenith delay; GPS slant path delay
– later on: satellite radiances
– later on, possibly: pre-processed cloud analysis data
– …

Requirements from KENDA on the verification
Envisaged NetCDF feedback file input from KENDA:
Within KENDA, each ensemble member will produce its own feedback file, containing all observation types. As input for VERIF, the ‘stat’ facility of KENDA will probably merge these feedback files into one feedback file which then contains all the (analysis and forecast) model values of all ensemble members from one experiment (for a 3-hour period). This file may be split up again according to (sets of) observation types (for smaller file sizes). (It has to be evaluated whether it makes sense to combine feedback files from several experiments into one feedback file which can be input to VERIF.) A sketch of the merging step follows below.
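A conceptual sketch of what the ‘stat’ merging step does: stack the per-member verification data into one array indexed by ensemble member (from which mean and spread also follow, cf. the spread requirement above). File and variable names, and the member count of 40, are assumptions, not the actual KENDA / stat interface.

import numpy as np
from netCDF4 import Dataset

member_files = [f"fof_member{k:02d}.nc" for k in range(1, 41)]   # hypothetical per-member files

veri_all = []
for path in member_files:
    with Dataset(path) as ds:
        veri_all.append(ds.variables["veri_data"][:])            # model values in obs space

veri_all   = np.stack(veri_all, axis=0)      # shape (n_members, n_runs, n_body)
ens_mean   = veri_all.mean(axis=0)
ens_spread = veri_all.std(axis=0, ddof=1)    # could be written to the combined feedback file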

Requirements from KENDA on the verification
NetCDF feedback file sizes:
Assuming 25 radars with full (!) volume scans every 10 minutes (# obs | det. file size | full ens. file size):
– Radar full volumes: 18*360*125*18*25 ≈ 4 * 10^8 | … GB | 600 GB
– Radar superobs: 13*120*13*18*25 ≈ 9 * 10^6 | … GB | 15 GB
File size for 1 experiment on the COSMO-DE domain for a 3-hour period (12 UTC) (# obs | det. file size | full ens. file size):
– TEMP: 22 * (15*5 + 39*3) ≈ 5 * 10^3 | … MB | 8 MB
– WPROF: 60 * 30 * 2 ≈ 4 * 10^3 | … MB | 6 MB
– AMDAR: 700 * 3 ≈ 2 * 10^3 | … MB | 3 MB
– ACARS: 2000 * 3 = 6 * 10^3 | … MB | 10 MB
– SYNOP: 3000 * (5 to 20) ≈ 6 * 10^4 | 1 – 4 MB | 100 MB
Storage per observation:
– Observation report Header: neglected
– Observation report Body: 4 float + 1 int + 3 short + 3 byte → about 30 bytes per obs
– Verification (model) data: 1 float = 4 bytes per model run and obs
Total bytes per obs (depending on the number of model runs):
– deterministic: about … bytes
– full ensemble: about … bytes
– main ensemble info: about 20 … 230 bytes
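A back-of-the-envelope check of these numbers, following the per-observation storage model on this slide (about 30 bytes of report body plus 4 bytes per model run and obs); the run counts used here are illustrative assumptions, not KENDA settings.

def feedback_file_size_mb(n_obs, n_model_runs, body_bytes=30, bytes_per_run=4):
    """Rough feedback file size in MB for n_obs observations with n_model_runs model values each."""
    return n_obs * (body_bytes + n_model_runs * bytes_per_run) / 1e6

n_synop = 6e4                          # ~6 * 10^4 SYNOP obs in 3 hours (from this slide)
for n_runs in (5, 200):                # e.g. a few deterministic runs vs. 40 members x 5 runs (assumed)
    print(n_runs, "model runs:", round(feedback_file_size_mb(n_synop, n_runs)), "MB")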