© Crown copyright Met Office From the global to the km-scale: Recent progress with the integration of new verification methods into operations Marion Mittermaier with contributions from Rachel North and Randy Bullock (NCAR)

© Crown copyright Met Office Outline
1. Using feature-based verification for global model evaluation during trialling
2. Implementation of the HiRA framework for km-scale NWP

© Crown copyright Met Office Synoptic evolution: Feature-based assessment of global model forecasts with Rachel North

© Crown copyright Met Office Forecast evolution … determined by features, and by how features affect one another.

© Crown copyright Met Office MODE – Method for Object-based Diagnostic Evaluation (Davis et al., MWR, 2006)
- Two parameters: a convolution radius and a threshold
- Highly configurable
- Attributes: centroid difference, angle difference, area ratio, etc.
- Focus is on spatial properties, especially spatial biases
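As a hedged illustration of the two-parameter recipe (convolution radius and threshold), here is a minimal Python sketch of MODE-style object identification; numpy/scipy are assumed, the function names are illustrative, and this is not the MET/MODE implementation itself:

```python
import numpy as np
from scipy.ndimage import label
from scipy.signal import fftconvolve

def mode_objects(field, conv_radius, threshold):
    """MODE-style object identification: convolve the raw field with a
    circular (disc) kernel of radius conv_radius grid points, then
    threshold the smoothed field and label connected regions."""
    r = int(conv_radius)
    y, x = np.ogrid[-r:r + 1, -r:r + 1]
    disc = (x**2 + y**2 <= r**2).astype(float)
    disc /= disc.sum()                       # normalise: smoothing preserves the mean
    smoothed = fftconvolve(field, disc, mode="same")
    labels, n_objects = label(smoothed >= threshold)
    return labels, n_objects

def area_ratio(labels_fcst, labels_anal):
    """One example attribute: total forecast object area divided by
    total analysed object area (an object-based frequency bias)."""
    return (labels_fcst > 0).sum() / (labels_anal > 0).sum()
```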

© Crown copyright Met Office Setting the scene
- Comparing the previous operational dynamical core (New Dynamics, ND) with the now-operational dynamical core (ENDGame, EG), operational as of 15 July 2014.
- Baseline: GA3.1 ND (25 km) for NWP (the operational global model version at the time).
- Assessment of GA5.0#99.13 at N512 EG (25 km) and N768 EG (17 km): NH winter and NH summer trials.
- Comparison against an independent ECMWF analysis:
  - N512 ND vs N768 EG (current and future operational configurations)
  - N512 EG vs N512 ND (impact of the changed dynamical core at the same resolution)
- Considering: 250 hPa jet cores, mslp lows and mslp highs.

© Crown copyright Met Office GA3.1 Temporal evolution
- Older N320 trial: 250 hPa winds > 60 m/s at a forecast lead time of t+96 h from the 12Z initialisation, compared to EC analyses.
- Differences in the size of forecast and analysed objects are not overshadowed by the growth of synoptic forecast error, i.e. matches can still be found.

© Crown copyright Met Office Foci for ENDGame assessment
- Spatial biases: extent of features
- Changes in intensity: deeper, stronger, higher, etc.
- Changes in the number of analysed and forecast objects: hits, false alarms, misses
- Changes in the attribute distributions: are the forecast attribute distributions closer to perfection?

© Crown copyright Met Office Object-based spatial frequency bias
[Figure: object-based spatial frequency bias against EC analyses for lows, highs and jets, NH winter (NHW) and NH summer (NHS). Lows: too large, getting smaller/neutral. Highs: too small, getting larger. Jets: too small, getting smaller. ND: neutral to -ve; EG: neutral to +ve.]

© Crown copyright Met Office Object intensities (N768 EG vs N512 ND, against EC analyses)
- Do not look at absolute min/max values within objects; use the 10th or 90th percentile as a more reliable estimate of how the intensity distribution has shifted or changed.
- Lows are deeper; highs and jets are stronger → sharper gradients and a more active, energetic model.
- Differences between the 00Z and 12Z analyses.
[Figure: 10th percentile for lows (EG deeper); 90th percentile for highs and jets (EG stronger).]
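A percentile-based object intensity is straightforward to compute; a minimal sketch (numpy assumed; illustrative, not the MET code):

```python
import numpy as np

def object_intensity(field, labels, object_id, q=90):
    """Intensity attribute for one object: the q-th percentile of the
    field values inside the object, rather than the absolute max/min,
    which is sensitive to single-gridpoint extremes."""
    values = field[labels == object_id]
    return np.percentile(values, q)

# For lows use a low percentile (e.g. q=10 on mslp); for highs and
# jet-core wind speed use a high one (e.g. q=90).
```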

© Crown copyright Met Office Number of objects (against EC analyses)
Lows:
- EG has more matched objects
- Some increase in false alarms, with fewer misses
- Larger matched and unmatched areas
Highs:
- EG has more matched objects
- Substantially more false alarms at early lead times with misses steady; impact of the diurnal pressure bias
- Area of matched objects improved at later lead times
Jets:
- EG has more matched objects
- Substantial increase in false alarms with comparable misses
- Modest increase in matched areas, but a substantial increase in unmatched area
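Once objects have been matched (MODE uses fuzzy-logic interest values for this), the hit/false-alarm/miss bookkeeping reduces to set arithmetic; an illustrative sketch with hypothetical inputs:

```python
def object_contingency(forecast_ids, analysis_ids, matched_pairs):
    """Count hits (matched pairs), false alarms (unmatched forecast
    objects) and misses (unmatched analysis objects).
    matched_pairs is a list of (forecast_id, analysis_id) tuples."""
    matched_f = {f for f, _ in matched_pairs}
    matched_a = {a for _, a in matched_pairs}
    hits = len(matched_pairs)
    false_alarms = len(set(forecast_ids) - matched_f)
    misses = len(set(analysis_ids) - matched_a)
    return hits, false_alarms, misses
```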

© Crown copyright Met Office Distribution Difference Skill Score (against EC analyses)
- The DDSS of attribute A measures the relative improvement in the distribution of that attribute compared to the perfect attribute distribution.
- F and G are the binned test and control distributions (m bins); H is the perfect distribution of attribute A (a Heaviside function).
- We know what the perfect attribute distribution looks like: e.g. 1 for ratio attributes (area ratio, intersection-over-union ratio) and 0 for difference attributes (centroid difference, angle difference).
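A plausible form of the score, consistent with the definitions above (a reconstruction only; the published definition is in Mittermaier 2014, in internal review), is a relative-distance skill score:

$$\mathrm{DDSS}_A \;=\; 1 \;-\; \frac{\sum_{i=1}^{m}\left|F_i - H_i\right|}{\sum_{i=1}^{m}\left|G_i - H_i\right|},$$

so that DDSS_A > 0 when the test distribution F is closer to the perfect distribution H than the control distribution G is.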

© Crown copyright Met Office Distribution Difference Skill Score (against EC analyses)
[Figure: DDSS panels for lows, highs and jets, by attribute — position error, overlap, extent, rotation error. Attribute verdicts: EG better; N768 better; EG better; mixed.]

© Crown copyright Met Office Tropical depressions below 1005 hPa
- Over two trials: June–Aug 2012 and Oct–Nov 2012.
- Confidence intervals depend on the attribute but are often very wide; results are noisy.
- Verdict: sample size is probably an issue in some cases.

© Crown copyright Met Office Conclusions
- Lows are deeper and often larger in extent.
- Highs are too intense and too small – a tightening of pressure gradients.
- Jets are stronger and slightly too large (too small before). The model generally verifies as too strong against the verifying analysis.
- Significant shifts in frequency bias (in terms of spatial extent), with a negative impact on the false alarm rate.
- Compelling seasonal differences in the Northern Hemisphere, which may relate to the land–atmosphere boundary.
- Pressure biases in the analyses can be very influential for any threshold-based method.

© Crown copyright Met Office Conclusions (cont’d)
- The analysis has added a new dimension to the traditional root-mean-square method of assessing global NWP performance.
- The features that drive our synoptic weather patterns are linked, and analysing them as such provides potentially useful information for understanding and resolving model deficiencies.
- Feature-based output is also much closer to the way forecasters (meteorologists) use and interpret model output on a daily basis, and should provide more meaningful objective guidance for forecasting applications.

© Crown copyright Met Office The High Resolution Assessment framework: Comparing ensemble and deterministic performance

© Crown copyright Met Office
- Small uncertainty at large scales = large uncertainty at small scales: a 50 km displacement is a 5% error for a 1000 km feature but a 100% error for a 50 km feature.
- Link to the larger scale: Russell et al.; Hanley et al. 2011, 2012.
- Justifies the use of a downscaling ensemble (MOGREPS-UK).

© Crown copyright Met Office Spatial sampling: 3 x 3, 7 x 7 and 17 x 17 neighbourhoods
- Only a small fraction of the grid points in the domain are used to assess the entire forecast! Note the variability within the neighbourhoods.
- This represents a fundamental departure from our current verification system strategy, where the emphasis is on extracting the nearest grid point (or bilinear interpolation) to obtain a matched forecast–observation pair.
- Instead, make use of spatial verification methods that compare a single observation to a forecast neighbourhood around the observation location → SO-NF (single observation, neighbourhood forecast).
- This is NOT upscaling/smoothing!
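Extracting the SO-NF neighbourhood is simple in practice; a minimal sketch (numpy assumed; names illustrative):

```python
import numpy as np

def forecast_neighbourhood(field, i, j, half_width):
    """Return the forecast values in the square neighbourhood around
    grid point (i, j), the point nearest the observation location --
    the SO-NF comparison: single observation vs forecast neighbourhood.
    No upscaling or smoothing: the raw gridded values are kept."""
    n = half_width
    return field[max(i - n, 0):i + n + 1, max(j - n, 0):j + n + 1].ravel()

# half_width=1 gives the 3x3 neighbourhood, 3 gives 7x7, 8 gives 17x17.
```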

© Crown copyright Met Office High Resolution Assessment (HiRA) framework (Mittermaier 2014, WAF)
- Use standard synoptic observations and a range of neighbourhood sizes.
- Use 24 h persisted observations as the reference.
- The method needs to be able to compare:
  - deterministic vs deterministic (different resolutions, and test vs control at the same resolution)
  - deterministic vs EPS
  - EPS vs EPS
- Test whether differences are statistically significant (Wilcoxon); “s” denotes significance at the 5% level.
- Grid-scale scores are calculated for reference, but are NOT the main focus.

Variable                  Old score   New score
Temperature               RMSESS      CRPSS
Vector wind (wind speed)  RMSVESS     RPSS
Cloud cover               ETS         BSS
Cloud-base height (CBH)   ETS         BSS
Visibility                ETS         BSS
1 h precipitation         ETS         BSS

RMS(V)ESS = root-mean-square (vector) error skill score; ETS = equitable threat score; BSS = Brier skill score; RPSS = ranked probability skill score; CRPSS = continuous ranked probability skill score; MAE = mean absolute error; PC = proportion correct (grid scale).

Ready for operational trialling in Jan 2015.
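Within HiRA, the neighbourhood forecast values (pooled ensemble members for an EPS) are treated as a pseudo-ensemble and scored against each observation with a probabilistic score. As an illustration only (a minimal numpy sketch using the kernel form of the CRPS; not the operational Met Office code):

```python
import numpy as np

def crps_ensemble(members, obs):
    """CRPS of a (pseudo-)ensemble against a single observation,
    using the kernel form: CRPS = E|X - y| - 0.5 * E|X - X'|."""
    x = np.asarray(members, dtype=float)
    term1 = np.abs(x - obs).mean()
    term2 = np.abs(x[:, None] - x[None, :]).mean()
    return term1 - 0.5 * term2

# HiRA-style usage: a 3x3 neighbourhood of a deterministic forecast
# gives 9 pseudo-members; a 12-member ensemble with a 3x3 neighbourhood
# gives 108. Skill is then expressed against the 24 h persistence
# reference: CRPSS = 1 - CRPS_forecast / CRPS_persistence.
```

Paired score differences across stations and validity times can then be tested for significance with the Wilcoxon test (e.g. scipy.stats.wilcoxon).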

© Crown copyright Met Office Deterministic vs EPS
- First 5 weeks of the 03Z MOGREPS-UK (2.2 km), compared against the deterministic UKV (1.5 km); positive values = the MOGREPS-UK ensemble is better.
- “none” = the 12 nearest-grid-point MOGREPS-UK values vs the single nearest UKV grid point.
- Shows the benefit of the ensemble: a neighbourhood is needed for convective precipitation, and the neighbourhood brings greater benefit for the UKV.

© Crown copyright Met Office Skill against persistence
[Figure: skill relative to 24 h persistence, MOGREPS-UK (2.2 km) and UKV (1.5 km).]

© Crown copyright Met Office Conclusions
- The new verification framework illustrates the benefit of the km-scale ensemble over the deterministic model.
- Bigger neighbourhoods improve forecast skill (for the most part), but the UKV needs, and benefits more from, neighbourhood processing, i.e. better “harvesting” of the information content.
- The ensemble is more reliable to begin with and requires a smaller neighbourhood to achieve reliability.
- There is a trade-off between resolution and reliability as a function of neighbourhood size, and therefore simple neighbourhood processing for the ensemble is possibly not optimal.

© Crown copyright Met Office Questions?
Mittermaier, M. P., 2014: A strategy for verifying near-convection-resolving forecasts at observing sites. Wea. Forecasting, 29(2).
Mittermaier, M. P., 2014: Quantifying the difference in MODE object attribute distributions for comparing different NWP configurations. In internal review.
Mittermaier, M. P., R. North and A. Semple, 2014: Feature-based diagnostic assessment of global NWP forecasts. In preparation for QJRMS.