Presentation transcript: "Requirements from KENDA on the verification — NetCDF feedback files: produced by the analysis system (LETKF) and the ‘stat’ utility ((to be) included in the LETKF / 3DVAR package)"

1 Requirements from KENDA on the verification — christoph.schraff@dwd.de

Data input: NetCDF feedback files
– produced by the analysis system (LETKF) and the ‘stat’ utility ((to be) included in the LETKF / 3DVAR package)
– format is documented in the ‘Feedback File Definition’

Observation header: incl. status and analysis flags

Observation body: observation value plus meta information for each single (bias corrected) obs value, incl.:
– bias correction (corrected minus reported value)
– level significance (e.g. TEMPs: surface, standard, significant, max. wind, tropopause)
– status (accepted, active, merged, passive, rejected, passive and rejected)
– analysis flags (bit flag table with quality check flags, e.g. passive obs type, blacklisted, dataset quality flag, redundant, thinning, gross error, rejected by first guess check, etc.)
– quality (e.g. observation confidence)

Verification data section: model analysis & forecast values projected onto observation space (i.e. after applying the obs operator); for ensembles: analysis and forecasts from each ensemble member, but also ensemble mean, spread, and potentially other quantities (e.g. the position of the obs value within the ensemble of model values, as input for Talagrand diagrams).

⇒ NetCDF feedback files contain all the information required.
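As a minimal sketch of how these three sections might map onto in-memory arrays, the following uses numpy as a stand-in for an actual NetCDF reader; the field names (`obs`, `bcor`, `level_sig`, `state`, `flags`) and the layout are illustrative assumptions, not the official names fixed by the ‘Feedback File Definition’:

```python
import numpy as np

# Observation body: one record per (bias corrected) observation value.
# Field names here are illustrative; the real names are defined in the
# 'Feedback File Definition'.
body_dtype = np.dtype([
    ("obs", "f4"),        # bias corrected observation value
    ("bcor", "f4"),       # bias correction (corrected minus reported)
    ("level_sig", "i2"),  # level significance (surface, standard, ...)
    ("state", "i2"),      # status (accepted, active, passive, ...)
    ("flags", "i4"),      # analysis flags (bit table of QC flags)
])

n_obs, n_members = 4, 3
body = np.zeros(n_obs, dtype=body_dtype)

# Verification data section: model values in observation space, one
# column per model run (here: 3 ensemble members + mean + spread).
veri_data = np.zeros((n_obs, n_members + 2), dtype="f4")

print(body.dtype.names)
print(veri_data.shape)
```

In a real reader these arrays would be filled from the NetCDF variables (e.g. via the netCDF4 library) rather than allocated with zeros.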

2 Remarks on the use of NetCDF feedback files for verification:

For each observation and model run, there will be one model equivalent of the observation value.
+ In general, the observation operator of the assimilation scheme is used to compute this model equivalent.
+ The observation quality / analysis flags can be taken into account.
– There is no a-posteriori flexibility in how to compute the model equivalent (e.g. use of the nearest grid point, or interpolation, or a whole environment of grid points, e.g. for cloud cover, etc.).
– The feedback file format does not (yet?) contain observation analysis data on the model grid, which are typically used e.g. for fuzzy or object-oriented verification.
– The feedback files may not necessarily include the model equivalent for all observations which are non-local in time.

3 Requirements on verified quantities and applicable conditions — features not specific to KENDA, but required e.g. for model system development in general (!):

Simple conditional verification: use meta information as conditions, specifically:
– status (priority 1)
– analysis flags (priority 2 for blacklist and dataset flags, priority 3 for other flags)
– level significance: it should be possible to exclude from TEMP verification the surface level (if this is not the standard anyway), or tropopause or significant levels; this issue is not specific to observations from feedback files (priority 1 for the surface level)
– quality / observation confidence (priority 2)
⇒ rather limited number of conditions, probably no need for a relational database

Several feedback files from different experiments must be able to be processed simultaneously, for comparison between experiments (or between an experiment and the operational run) (priority 1).

It must be ensured that the same set of observations is used for all experiments in a comparison! (priority 1)
– For this purpose, and to allow for full flexibility, the status and analysis flags should be assigned internally to each experiment separately (for each observation), so that it is possible to specify that only observations having a certain flag set, or not set, in all (!) experiments should be used in a comparison.
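The last requirement — use only observations whose flags are clean in all experiments — can be sketched as a bit-mask intersection. The flag names and bit positions below are invented for illustration; the real assignments come from the analysis-flag bit table in the ‘Feedback File Definition’:

```python
import numpy as np

# Illustrative bit positions, NOT the official bit table.
FL_BLACKLIST = 1 << 2
FL_FGCHECK = 1 << 5   # rejected by first guess check

# Analysis flags for the same 5 observations in two experiments.
flags_exp1 = np.array([0, FL_BLACKLIST, 0, FL_FGCHECK, 0])
flags_exp2 = np.array([0, FL_BLACKLIST, FL_FGCHECK, 0, 0])

def clean_in_all(flag_arrays, bad_bits):
    """True where none of the bad bits is set in ANY experiment, so
    the same set of observations enters the comparison."""
    mask = np.ones(flag_arrays[0].shape, dtype=bool)
    for flags in flag_arrays:
        mask &= (flags & bad_bits) == 0
    return mask

use = clean_in_all([flags_exp1, flags_exp2], FL_BLACKLIST | FL_FGCHECK)
print(use)  # only obs 0 and 4 are usable in both experiments
```

The same pattern extends to the opposite condition (a flag set in all experiments) by flipping the comparison.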

4 Requirements on verified quantities and applicable conditions — features not specific to KENDA, but required e.g. for model system development in general (!):

Need a choice to verify against either bias corrected observation values or the original reported observation values (priority 1).

For comparisons between experiments with different bias corrections (incl. zero bias correction), when using bias corrected observation values, need a choice between two options:
– either each experiment is verified against its own observation values (but using the same set of observations) (⇒ standard case),
– or the (bias corrected) observation values from one experiment are used to verify the model values of both experiments; this requires computing the differences between the bias corrections.
(priority 2; could become priority 1 if bias correction for conventional observations becomes a big issue)

Need the ability to plot the observations from a single report (e.g. a vertical temperature profile) together with (a subset of) an ensemble of model values (a sort of spaghetti plot), or to plot an ensemble of model minus observation values (as in Uli Pflüger's TEMP verification tool) (priority 1).
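The two options can be illustrated numerically. Since the body stores the bias correction as corrected minus reported value, verifying one experiment against another experiment's observation values amounts to shifting its departures by the difference of the two bias corrections. All numbers below are invented:

```python
import numpy as np

# Bias corrected obs and applied corrections (corrected minus reported,
# as stored in the observation body) for two experiments; invented values.
obs1, bcor1 = np.array([10.3, 5.1]), np.array([0.3, 0.1])
obs2, bcor2 = np.array([10.0, 5.0]), np.array([0.0, 0.0])  # zero bias corr.
model2 = np.array([10.2, 5.4])  # model equivalents from experiment 2

# Option 1 (standard case): each experiment is verified against its own
# bias corrected observation values.
dep_own = model2 - obs2

# Option 2: verify experiment 2 against the observation values of
# experiment 1; equivalent to shifting the departures by the difference
# of the bias corrections (same reported values assumed).
dep_common = model2 - obs2 - (bcor1 - bcor2)

print(dep_own, dep_common)
```

Note that `dep_common` equals `model2 - obs1` here, because both experiments start from the same reported values.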

5 Requirements on verified quantities and applicable conditions — features specific to KENDA:

It must be possible to plot the resulting score of the verification, e.g. the RMSE, in the same plot as the ensemble spread (priority 1).

In KENDA, the ensemble spread will typically be stored in the NetCDF feedback files. However (with lower priority), VERIF should also be able to compute quantities such as the ensemble spread itself from the values of the ensemble members.

Specific ensemble scores include:
– Talagrand diagrams
– ROC curves / area (hit rate versus false alarm rate, computed for many thresholds)
– reliability (uses bins (thresholds); difference between forecast and true probabilities)
– Brier Skill Score (uses a threshold (or bins), compares to a reference, e.g. climatology or persistence) (BSS = 1 – BS/BSref, where BS = reliability – resolution + uncertainty)
– Rank Probability Skill Score (categorised BSS)
(priority 2, except for Talagrand diagrams: priority 1)
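Recomputing the spread from the member values, and ranking each observation within its ensemble of model values (the input for a Talagrand diagram), are both one-liners on the verification data matrix. This is a sketch on synthetic data, not VERIF code:

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_members = 100, 40
ens = rng.normal(size=(n_obs, n_members))  # model equivalents per member
obs = rng.normal(size=n_obs)               # observation values

# Ensemble mean and spread recomputed from the member values
# (normally read directly from the feedback file).
mean = ens.mean(axis=1)
spread = ens.std(axis=1, ddof=1)

# Talagrand (rank) histogram: position of each obs within the sorted
# ensemble of model values; n_members + 1 possible bins.
rank = (ens < obs[:, None]).sum(axis=1)
hist = np.bincount(rank, minlength=n_members + 1)

print(spread.mean(), hist.sum())
```

A flat rank histogram indicates a statistically consistent ensemble; U-shapes indicate underdispersion.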

6 Requirements on verified quantities and applicable conditions — features that may be partly specific to KENDA:

Observation types include:
– upper-air obs (radiosonde, aircraft, wind profiler / RASS, …)
– surface obs (synoptic, ship, buoy, (scatterometer), …): not only surface (station) pressure, 2-m temperature and humidity, and 10-m wind, but also (total, low, middle, high) cloud cover, cloud base height, visibility (?), etc.
– radar: radial wind, 3-dim. reflectivity
– later on: GPS zenith delay; GPS slant path delay
– later on: satellite radiances
– later on, possibly: pre-processed cloud analysis data
– …

7 Envisaged NetCDF feedback file input from KENDA:

Within KENDA, each ensemble member will produce its own feedback file, containing all observation types. As input for VERIF, the ‘stat’ facility of KENDA will probably combine these feedback files into one feedback file, which then contains all the (analysis and forecast) model values of all ensemble members from one experiment (for a 3-hour period). This file may be split up again according to (sets of) observation types (for smaller file sizes). (It has to be evaluated whether it makes sense to combine feedback files from several experiments into one feedback file that can be input to VERIF.)
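Conceptually, the combining step of the ‘stat’ facility stacks the per-member model-equivalent vectors (one per member feedback file, for the same observations) into a single verification data matrix. A toy sketch with invented values:

```python
import numpy as np

# One column of model equivalents per member feedback file, all for the
# same n_obs observations; values are invented.
n_obs, n_members = 6, 40
member_cols = [np.full(n_obs, 1.0 + 0.1 * m) for m in range(n_members)]

# What the combined file would hold: all members' model values side by
# side, one row per observation, one column per member.
veri_all = np.column_stack(member_cols)

print(veri_all.shape)
```

Splitting the combined file by observation type would then simply select row subsets of this matrix (plus the matching header/body records).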

8 NetCDF feedback file sizes:

Observation report header: neglected.
Observation report body: 4 float + 1 int + 3 short + 3 byte ⇒ about 30 bytes per obs.
Verification (model) data: 1 float = 4 bytes per model run and obs.

                        # model runs    total bytes per obs
– deterministic:        about 10        70 bytes
– full ensemble:        about 400       1600 bytes
– main ensemble info:   about 20 – 50   110 – 230 bytes

File sizes for 1 experiment on the COSMO-DE domain for a 3-hour period (12 UTC):

            # obs                            det. file size   full ens. file size
– TEMP      22 * (15*5 + 39*3) = 5 * 10^3    0.3 MB           8 MB
– WPROF     60 * 30 * 2 = 4 * 10^3           0.3 MB           6 MB
– AMDAR     700 * 3 = 2 * 10^3               0.1 MB           3 MB
– ACARS     2000 * 3 = 6 * 10^3              0.4 MB           10 MB
– SYNOP     3000 * (5 to 20) = 6 * 10^4      1 – 4 MB         100 MB

Assuming 25 radars with full (!) volume scans every 10 minutes:
– radar full volumes   18*360*125*18*25 = 4 * 10^8   30 GB    600 GB
– radar superobs       13*120*13*18*25 = 9 * 10^6    0.6 GB   15 GB
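The per-observation byte counts and the radar estimate above follow directly from the stated body layout and one float per model run, as this small check reproduces (header neglected, so the results land slightly below the rounded table entries):

```python
# Body: 4 float + 1 int + 3 short + 3 byte, as stated above.
body_bytes = 4 * 4 + 1 * 4 + 3 * 2 + 3 * 1   # = 29, i.e. about 30 bytes

def bytes_per_obs(n_model_runs):
    # 1 float = 4 bytes per model run and obs.
    return body_bytes + 4 * n_model_runs

# 25 radars, full volume scans every 10 min, for a 3-hour period.
n_radar_full = 18 * 360 * 125 * 18 * 25      # about 4e8 obs

det_gb = n_radar_full * bytes_per_obs(10) / 1e9     # deterministic, ~10 runs
ens_gb = n_radar_full * bytes_per_obs(400) / 1e9    # full ensemble, ~400 runs
print(det_gb, ens_gb)  # of the order of the 30 GB / 600 GB table entries
```
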

