1 Verification of ensemble precipitation forecasts using the TIGGE dataset
Laurence J. Wilson, Environment Canada
Anna Ghelli, ECMWF
GIFS-TIGGE Meeting, Feb 22-24, 2010

2 Outline
- Data sources for precipitation
- TIGGE precipitation verification project
- Discussion of data requirements for surface verification at points

3 Data sources for precipitation observations
Gauge-based:
- GTS data from NCEP or NCDC: may have formatting issues
- GCOS: too long a delay to get the data
- 20+ years of GTS data from ECMWF: best for global coverage, but may have political issues
- Regional higher-resolution datasets: allow upscaling and better estimates of gridbox averages
  - European dataset (compiled by Anna Ghelli)
  - SHEF in the US: contains mesonet data
  - Others
Remotely sensed: good horizontal coverage, but possibly poorer temporal coverage; may be model-dependent
- US radar-based analysis, 4 km
- NCAR ground-truthed combined analysis: satellite estimates
- Hydro-estimator (UKMO)
Strategy for verification: verify regionally with high-resolution datasets where available; verify globally using the ECMWF archive

4 Precipitation verification project
General goal: to start to fill the gap in surface verification of TIGGE, and to answer the TIGGE question in a way relevant to user communities other than modelers
Specific goal: to verify global 24 h precipitation forecasts from all the ensembles in the TIGGE archive, and combinations of them
One region at a time, using the highest-density observations available (Canada and Europe so far)
Methodology:
- Cherubini et al. upscaling; verify only where data are available
- Single station, nearest gridpoint, where data are sparser
- Kernel density fitting following Peel and Wilson to look at the extremes of the distributions
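The kernel-density step above can be sketched roughly as follows. This is a minimal illustration using a Gaussian kernel and Silverman's rule-of-thumb bandwidth; the Peel and Wilson method differs in detail, and the function name `kde_exceedance` is invented for this sketch, not taken from the project code.

```python
import math

def kde_exceedance(members, threshold):
    """P(precip > threshold), estimated by smoothing the ensemble
    members with a Gaussian kernel (Silverman's bandwidth rule)."""
    n = len(members)
    mean = sum(members) / n
    std = math.sqrt(sum((m - mean) ** 2 for m in members) / n)
    h = 1.06 * std * n ** (-0.2)  # rule-of-thumb bandwidth
    # The fitted density is a mixture of normals centred on the members,
    # so the exceedance probability is the average of the kernel tails.
    return sum(0.5 * math.erfc((threshold - m) / (h * math.sqrt(2)))
               for m in members) / n
```

Smoothing the discrete ensemble this way lets probabilities be assigned to thresholds beyond the largest member, which is why it is useful for the extremes of the distribution.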

5 Precipitation verification project: methodology - Europe
Upscaling: 1x1 degree gridboxes, the limit of model resolution
- Average observations over grid boxes, with at least 9 stations per grid box (European data)
- Verify only where there are enough data
- Matches observation and model resolution locally
- Answers questions about the quality of the forecasts within the capabilities of the model
- The most likely users are modelers.
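The upscaling rule on this slide (grid-box means, at least 9 stations per box) can be sketched as below, assuming station reports arrive as (lat, lon, precip) triples; `upscale` is a hypothetical helper written for illustration, not code from the project.

```python
import math
from collections import defaultdict

def upscale(stations, min_stations=9):
    """Average station precipitation onto 1x1 degree grid boxes,
    keeping only boxes with at least `min_stations` reports."""
    boxes = defaultdict(list)
    for lat, lon, precip in stations:
        # identify the 1x1 degree box by its south-west corner
        boxes[(math.floor(lat), math.floor(lon))].append(precip)
    return {box: sum(vals) / len(vals)
            for box, vals in boxes.items() if len(vals) >= min_stations}
```

Boxes with too few stations simply drop out, which is the "verify only where enough data" rule expressed in code.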

6 Reliability – Summer 08 – Europe – 114 h

7 ROC – Summer 08 – Europe – 114 h

8 Precipitation verification project: methodology - Canada
Single-station verification
- Canadian verification over 20 widely spaced stations, only one station per gridbox; comparison of the nearest-gridpoint forecast to the observation
- Pointwise verification: we cannot upscale properly because we do not have the necessary data density
- Valid nevertheless as absolute verification of model predictions
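Nearest-gridpoint matching, as used for the Canadian stations, might look like the following for a regular lat/lon grid; the grid-origin and spacing arguments are assumptions for illustration only.

```python
def nearest_gridpoint(lat, lon, lat0, lon0, dlat, dlon):
    """Index (i, j) of the model gridpoint nearest a station, for a
    regular grid starting at (lat0, lon0) with spacings dlat, dlon."""
    i = round((lat - lat0) / dlat)
    j = round((lon - lon0) / dlon)
    return i, j
```

The forecast at that single gridpoint is then compared directly with the station observation, with no spatial averaging on either side.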

9 Results – Canada – ROC curves – 144 h

10 Verification of TIGGE forecasts with respect to surface observations – next steps
- Combined-ensemble verification
- Other regions: Southern Africa should be next (non-GTS data are available)
- Use of spatial verification methods
THEN maybe we will know the answer to the TIGGE question.

11 Summary – Verification of SWFDP products

Product | Who should verify | General method
NMC severe weather warnings | NMC | Contingency tables and scores
RSMC severe weather guidance charts | RSMC | Graphical contingency table
Risk tables and probability tables | RSMC | As probabilities: reliability tables and ROC; Brier score and skill
UM SA12 products (surface variables) | RSMC | Multicategory contingency tables (precipitation and wind) vs surface data where available (hydro-estimator?)
Aladin La Reunion | RSMC La Reunion | Multicategory contingency tables (precipitation and wind) vs surface data where available (hydro-estimator?)
SAWS EPS | SAWS | Probability verification: reliability and ROC, Brier score and skill vs station data as available
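The Brier score and skill score named in the table can be computed as below. This is the standard definition with a climatological reference forecast, shown as a sketch rather than code from any of the centers.

```python
def brier_score(probs, outcomes):
    """Mean squared difference between forecast probabilities and
    binary outcomes (0 = event did not occur, 1 = it did)."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

def brier_skill_score(probs, outcomes):
    """Skill relative to always forecasting the climatological base rate.
    1 is perfect, 0 matches climatology, negative is worse than it."""
    base_rate = sum(outcomes) / len(outcomes)
    bs_ref = brier_score([base_rate] * len(outcomes), outcomes)
    return 1.0 - brier_score(probs, outcomes) / bs_ref
```

The Brier score decomposes into reliability and resolution terms, which is why those two attributes appear alongside it in the Canadian results.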

12 Summary – Products to be verified

Product | Who should verify | General method
ECMWF, NCEP, UKMO deterministic models | ECMWF, NCEP, UKMO resp. | Continuous scores (temperature); contingency tables (precipitation, wind)
ECMWF, NCEP, UKMO EPS | ECMWF, NCEP, UKMO resp. | Scores for ensemble pdfs; scores for probability forecasts with respect to relevant categories

- Verification should be done with respect to surface observations where they are available, including those not on the GTS
- Quality control of observations is important and should not use models
- Verification should be done at the issuing location, to avoid transfer of large data volumes
- Observation data should be made available to RSMCs and global centers for verification purposes (ECMWF initiative for GTS data)
- Example: a request for ECMWF data from Tanzania
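The contingency-table scores mentioned for the deterministic models can be sketched with the standard definitions (probability of detection, false alarm ratio, equitable threat score); the function below is illustrative only and not part of any center's verification system.

```python
def contingency_scores(hits, false_alarms, misses, correct_negatives):
    """Common scores from a 2x2 contingency table of forecast vs observed."""
    pod = hits / (hits + misses)                # probability of detection
    far = false_alarms / (hits + false_alarms)  # false alarm ratio
    n = hits + false_alarms + misses + correct_negatives
    # equitable threat score: hits in excess of those expected by chance
    hits_random = (hits + misses) * (hits + false_alarms) / n
    ets = (hits - hits_random) / (hits + misses + false_alarms - hits_random)
    return pod, far, ets
```

For multicategory tables (e.g. precipitation classes) the same counts are taken per category, or scores such as the Heidke skill score are computed over the full table.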

13 Discussion
How to make the TIGGE archive more user-friendly for pointwise applications of the data? We should be able to increase the user community this way.
- Important, especially for precipitation verification and related projects
- A parallel archive for a subset of variables: surface only?
- Go back two years (to Oct 2007) to reorganize the archive?
- We are starting to get requests of this type, e.g. from Tanzania
- Other ways?

14 www.ec.gc.ca

15 Issues for TIGGE verification
Use of analyses as truth gives an advantage to one's own model. Alternatives:
- Each center verifies against its own analysis
- Treat the analyses as an ensemble (weighted or not)
- Random selection from all analyses
- Use the "best" analysis and eliminate the related model from the comparison
- Average analysis (may have different statistical characteristics)
- Model-independent analysis (restricted to data-rich areas, but that is where verification might be most important for most users)
The problem goes away for verification against observations (as long as they are not quality-controlled with respect to any model).

16 Park et al. study – impact of the analysis used as truth in verification

17 Issues for TIGGE verification (cont'd)
Bias adjustment/calibration
- Reason: to eliminate "artificial" spread in the combined ensemble arising from systematic differences between the component models
- Applies to the first (mean) and second (spread) moments
- Several studies have been, or are being, undertaken
- Results on the benefits are not conclusive so far; is the sample too small for bias estimation?
- Alternative: rather than correcting bias, eliminate the inter-ensemble component of bias and spread variation.
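A first-moment (mean) bias correction of the kind discussed could be sketched as follows: each component model's mean error against the observations is removed before the ensembles are combined, so systematic offsets between models do not inflate the combined spread. This sketch does not touch the second moment (spread), and the names are illustrative.

```python
def remove_mean_bias(forecasts, observations):
    """Subtract each component ensemble's mean error over the training
    sample, so inter-model offsets do not appear as extra spread."""
    corrected = {}
    for model, fcsts in forecasts.items():
        errors = [f - o for f, o in zip(fcsts, observations)]
        bias = sum(errors) / len(errors)
        corrected[model] = [f - bias for f in fcsts]
    return corrected
```

The slide's caveat applies directly here: if the sample used to estimate `bias` is too small, the correction itself is noisy, which may explain the inconclusive results so far.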

18 ROC – Winter 07-08 – Europe – 42 h

19 ROC – Winter 07-08 – Europe – 114 h

20 Reliability – Winter 07-08 – Europe – 42 h

21 Results – Canada – Brier skill, resolution and reliability

