Verification of Warn-on-Forecast
Adam J. Clark – CIMMS
9 February 2012, Warn-on-Forecast and High Impact Weather Workshop
Stensrud et al. 2009, BAMS

General issues for verifying WoF...

To extract useful information, we need to go beyond traditional metrics (ETS, bias, Brier score, ROC curves, etc.)
– Traditional scores can give useful information on larger-scale environmental fields, but for short-term forecasts of storms, additional methods are needed.
For WoF, storm attributes should be verified along with dBZ and Vr: storm size, duration, number, timing of CI, intensity of rotation, length of rotation track...
Ensemble characteristics are important: dispersion, reliability, sharpness, spread-error relationships.
"Scale issues" are important to consider:
– At what scales should verification be performed?
– At what scales do the models have skill?
– At what scales should probabilistic forecasts be presented?

More issues...

What observational datasets will be used for verification?
– e.g., rotation tracks from WDSS-II for mesocyclones
How to test for statistical significance? Not easy! Need to account for spatial/temporal autocorrelation of errors.
– Resampling (e.g., Hamill 1999)
– Field significance (Elmore et al. 2006)
What model fields are needed, and how frequently should they be output?
– Use hourly-max fields to save time/space?
Efficient methods are needed to quickly visualize distributions of forecast storm attributes.
– For example, a forecaster should be able to quickly assess the number of ensemble members that forecast long-track and/or intense mesocyclones, or whether the forecast PDF is bimodal (some members break the cap and some do not).
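The resampling approach mentioned here (e.g., Hamill 1999) can be illustrated with a paired bootstrap on per-case score differences; resampling whole cases (e.g., days) rather than individual grid points is one simple way to respect the temporal autocorrelation of errors. This is a minimal sketch, not the presentation's actual code; the function name and arguments are assumptions.

```python
import numpy as np

def paired_bootstrap_pvalue(score_a, score_b, n_boot=10000, seed=0):
    """Two-sided p-value for the mean score difference between two
    forecast systems, resampling whole cases (e.g., days) so that the
    within-case autocorrelation of errors is preserved (cf. Hamill 1999).

    score_a, score_b: 1-D arrays of per-case scores, same length.
    """
    rng = np.random.default_rng(seed)
    diffs = np.asarray(score_a, dtype=float) - np.asarray(score_b, dtype=float)
    observed = diffs.mean()
    n = diffs.size
    # Resample under the null hypothesis: recenter the differences so
    # their mean is zero, then draw bootstrap replicates with replacement.
    centered = diffs - observed
    idx = rng.integers(0, n, size=(n_boot, n))
    boot_means = centered[idx].mean(axis=1)
    p = np.mean(np.abs(boot_means) >= abs(observed))
    return observed, p
```

A field-significance test (Elmore et al. 2006) would add a second layer, asking whether the number of locally significant grid points could have occurred by chance.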

Non-traditional methods for verifying WoF

Neighborhood methods
– Consider a neighborhood around each grid point to compute various metrics.
Scale separation
– Examine the spatial error field at different scales using wavelets.
[Figure: spatial error field decomposed into scales 1–5]
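The neighborhood idea can be made concrete with a small sketch: compute, at each grid point, the fraction of neighboring points where the event occurs, then compare forecast and observed fractions with the Fractions Skill Score (Roberts and Lean 2008). The function names and the square-neighborhood choice are illustrative assumptions, not code from the presentation.

```python
import numpy as np

def neighborhood_fraction(binary_field, radius):
    """Fraction of grid points with the event inside a (2*radius+1)^2
    square neighborhood of each point (neighborhoods truncated at edges).

    binary_field: 2-D array of 0/1 event indicators.
    """
    f = np.asarray(binary_field, dtype=float)
    ny, nx = f.shape
    out = np.empty_like(f)
    for j in range(ny):
        j0, j1 = max(0, j - radius), min(ny, j + radius + 1)
        for i in range(nx):
            i0, i1 = max(0, i - radius), min(nx, i + radius + 1)
            out[j, i] = f[j0:j1, i0:i1].mean()
    return out

def fss(forecast_frac, observed_frac):
    """Fractions Skill Score (Roberts and Lean 2008): 1 is perfect,
    0 is no skill relative to the reference MSE."""
    num = np.mean((forecast_frac - observed_frac) ** 2)
    den = np.mean(forecast_frac ** 2) + np.mean(observed_frac ** 2)
    return 1.0 - num / den if den > 0 else np.nan
```

Sweeping `radius` over a range of values is one way to answer the "at what scales do the models have skill?" question above: the scale at which FSS first exceeds a useful-skill threshold.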

Object-based methods for verifying WoF

Objects are defined as contiguous regions of observed/model grid points exceeding a predefined threshold. Object attributes like location, size, and shape can be compared.
2-D object-based algorithms have been around for a while – e.g., MODE (Method for Object-based Diagnostic Evaluation; Davis et al. 2006) – and many useful applications have been illustrated.
The lack of a 3rd dimension limits the ability to track the time evolution of objects – and the time evolution of storms (e.g., storm tracks, duration, speed) is what we are most interested in for WoF!
DTC/NCAR will be releasing "MODE-TD" this spring. How easy will it be to use with extremely high-resolution forecasts?
For preliminary testing, I have developed a 3D object code designed for two different applications:
– 1) Measuring track lengths and maximum intensity of simulated rotating storms.
– 2) Defining the timing of observed and forecast convective initiation.
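The contiguous-region object definition above amounts to connected-component labeling over a (time, y, x) exceedance mask. A minimal pure-Python/NumPy sketch follows; the function name and the choice of 6-connectivity (face-adjacent neighbors in t, y, x) are assumptions, not the author's actual code.

```python
import numpy as np
from collections import deque

def label_3d_objects(exceed):
    """Label contiguous True regions in a (time, y, x) boolean array
    using 6-connectivity. Returns an int array of object labels
    (0 = background) and the number of objects found.
    """
    exceed = np.asarray(exceed, dtype=bool)
    labels = np.zeros(exceed.shape, dtype=int)
    nt, ny, nx = exceed.shape
    current = 0
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for start in zip(*np.nonzero(exceed)):
        if labels[start]:
            continue  # already assigned to an earlier object
        current += 1
        queue = deque([start])
        labels[start] = current
        while queue:  # breadth-first flood fill from the seed point
            t, j, i = queue.popleft()
            for dt, dj, di in offsets:
                p = (t + dt, j + dj, i + di)
                if (0 <= p[0] < nt and 0 <= p[1] < ny and 0 <= p[2] < nx
                        and exceed[p] and not labels[p]):
                    labels[p] = current
                    queue.append(p)
    return labels, current
```

Because time is just a third axis here, a storm that persists across output times becomes a single object, which is what permits track-length and duration measurements.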

Application 1: Visualization/verification of simulated rotating storm track lengths

The 3-D object code is applied to hourly-max updraft helicity (UH) to identify the number, length, and intensity of 3D UH objects (i.e., rotating storm tracks).
A study examined whether total UH path lengths could be used as a proxy for total tornado path lengths (Clark et al. 2012).
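One simple way to turn a labeled 3D UH object into a track length is to project the object's grid points onto the horizontal plane and measure the footprint's extent along its leading principal axis. This is a sketch of one plausible measure, not necessarily the method of Clark et al. (2012); the function name and arguments are assumptions.

```python
import numpy as np

def track_length_km(object_points_yx, dx_km=4.0):
    """Approximate path length of a UH-track object: the horizontal
    extent of its footprint along the leading principal axis of its
    grid-point coordinates, scaled by the grid spacing.

    object_points_yx: (N, 2) array of (j, i) grid indices belonging to
    one labeled 3D object, projected onto the horizontal plane.
    """
    pts = np.asarray(object_points_yx, dtype=float)
    pts = pts - pts.mean(axis=0)
    # Leading eigenvector of the 2x2 coordinate covariance matrix
    # gives the track's dominant orientation.
    cov = pts.T @ pts / max(len(pts) - 1, 1)
    vals, vecs = np.linalg.eigh(cov)
    major = vecs[:, np.argmax(vals)]
    proj = pts @ major
    return (proj.max() - proj.min()) * dx_km
```

For a nearly linear track this reduces to the straight-line distance between the track's endpoints; strongly curved tracks would need a skeleton-based measure instead.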

Results

– For each case, the total track length of 3D UH objects from each SSEF ensemble member was plotted against the total tornado path lengths for corresponding time periods.
– UH path lengths identified using a threshold of 100 m² s⁻² are shown here because it worked best.
– Portions of tracks from simulated storms that were high-based or elevated were filtered out. For details... ask later!
– The technique works well, but how do we efficiently present information on 3D UH objects and utilize the inherent uncertainty information provided by the ensemble?

[Figure: filtered and calibrated UH > 100 m² s⁻²]

Example UH Forecast Product: 27 April

– Max UH from any ensemble member: blue shading scale for elevated/high-based UH, red shading for surface-based UH.
– Thick red line: 27 April exceedance probabilities for total tornado path lengths up to 5000 km.
– Thin red lines: exceedance probabilities for all other days.
– Green line: climatology.
– Grey vertical lines mark path lengths corresponding to 1-, 2-, and 10-year return periods.
– The length of an entire row is the total UH path length for an ensemble member; members are ordered longest to shortest.
– Lengths of segments correspond to path lengths of individual UH objects. The shading level shows the max intensity of UH within each object; red shading is for surface-based and green for elevated UH tracks.

Computing observed UH for 27 April

– A 3DVAR data assimilation system (Gao et al. 2009) used in the 2011 Experimental Warning Program Spring Experiment was run over a domain covering the 27 April outbreak.
– WSR-88D reflectivity and velocity data were assimilated at 1.25-km grid spacing every 5 minutes over the period 15Z to 3Z, giving 144 separate high-resolution analyses.
– The 12-km NAM forecast valid at analysis time was used as the first-guess background.
– This 3DVAR system is designed for identifying mesocyclones, so observed UH is easily computed using the same formulation as in the model.
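For reference, updraft helicity is conventionally defined (e.g., Kain et al. 2008) as the vertical integral of vertical velocity times vertical vorticity over the 2–5 km AGL layer. A minimal column-wise sketch, assuming a simple trapezoidal layer sum on model levels (the function name and discretization are illustrative, not the exact model formulation):

```python
import numpy as np

def updraft_helicity(w, zeta, z_agl, z_bot=2000.0, z_top=5000.0):
    """Updraft helicity (m^2 s^-2): vertical integral of w * zeta over
    the 2-5 km AGL layer, approximated with a trapezoidal sum over the
    part of each model layer that falls inside [z_bot, z_top].

    w, zeta: (nz,) profiles of vertical velocity (m/s) and vertical
    vorticity (1/s) in one grid column; z_agl: (nz,) heights (m AGL).
    """
    w = np.asarray(w, dtype=float)
    zeta = np.asarray(zeta, dtype=float)
    z = np.asarray(z_agl, dtype=float)
    uh = 0.0
    for k in range(len(z) - 1):
        lo, hi = max(z[k], z_bot), min(z[k + 1], z_top)
        if hi <= lo:
            continue  # layer entirely outside the 2-5 km window
        # Trapezoidal mean of w*zeta, times the in-window layer depth.
        uh += 0.5 * (w[k] * zeta[k] + w[k + 1] * zeta[k + 1]) * (hi - lo)
    return uh
```

Applying the same formula to the 3DVAR analysis grid is what makes the "observed UH" directly comparable to the model's hourly-max UH.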

[Figure panels: raw maximum UH and filtered maximum UH, from 1500 UTC, 27 April]

– Apply the 3D object algorithm in a sensible way and plot only the identified objects...
– 64 total UH tracks identified.
– The longest track was ~725 km, and 12 tracks were over 300 km long.
– Total rotating storm path length of almost km.

Observed UH compared to individual SSEF members

Application 2: Convective Initiation

For the CI component of SFE2011, convective activity (CA) was defined as reflectivity > 35 dBZ at the −10 °C level.
To identify CI events, 3D CA objects can be defined, and the grid points within each object with the earliest time are CI.
Additionally, other grid points that are local time minima within 3D objects are identified as CI.
– This allows "merging storms" to have a unique CI event assigned.
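The earliest-time rule above can be sketched as follows. This minimal version flags only each object's first-time grid points; the local-time-minima extension for merging storms is omitted. Function and variable names are assumptions, not the presentation's code.

```python
import numpy as np

def ci_events(labels, n_objects):
    """Identify convective-initiation (CI) grid points within labeled
    3-D convective-activity objects on a (time, y, x) grid: for each
    object, the grid points at its earliest time step are flagged as CI.

    labels: int array from a 3-D object-labeling step (0 = background).
    Returns a list of (object_id, t, j, i) CI points.
    """
    events = []
    for obj in range(1, n_objects + 1):
        t_idx, j_idx, i_idx = np.nonzero(labels == obj)
        t_min = t_idx.min()           # earliest time the object exists
        first = t_idx == t_min
        for t, j, i in zip(t_idx[first], j_idx[first], i_idx[first]):
            events.append((obj, int(t), int(j), int(i)))
    return events
```

Comparing the (t, j, i) of forecast CI events against observed ones then gives timing and displacement errors for initiation, per storm rather than per grid point.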

Example: CI over Oklahoma, 24 May 2011

– Observed CI: many more storms than in the ensemble members.
[Figure panels: observed CI; microphysics members (MYJ PBL); control member (Thompson/MYJ); PBL members (Thompson MP)]

Verification with polarimetric radar data of an EnKF-analyzed tornadic supercell using single- and double-moment microphysics

Experiment design:
– Analysis domain: 1-km grid spacing, 40 vertical levels.
– Assimilation starts at 0005 UTC and is performed every 5 minutes until 0100 UTC.
– KTLX reflectivity and Vr are assimilated in the EnKF analysis using SM and DM schemes.
– KOUN dual-pol data are used as independent data for verification.
– DM scheme: Milbrandt and Yau (2005); SM scheme: LFO (1983).

Jung, Y., M. Xue, and M. Tong, 2012: Ensemble Kalman filter analysis of the 29–30 May 2004 Oklahoma tornadic thunderstorm using one- and two-moment bulk microphysics, with verification against polarimetric radar data. Mon. Wea. Rev., in press.

Summary

– Reflectivity (Z_H): the SM and DM experiments agree well with obs.
– Differential reflectivity (Z_DR): the DM experiment was able to retrieve the Z_DR arc, while SM was not (SM cannot simulate size sorting).
– Specific differential phase (K_DP): the DM experiment was able to reproduce the high-K_DP region collocated with the region of high Z_H; SM was too high.
– Cross-correlation coefficient (ρ_hv): a ρ_hv reduction around the updraft for DM, but the orientation/placement was not right in SM.
– Reflectivity alone is insufficient for determining the quality of estimated microphysical states because the state dependency of Z_H is not unique.

New, creative methods to verify and visualize forecasts are needed.
– For WoF, the emphasis should be on verifying storm attributes.
– 3D object-based methods show promise, as illustrated in the two applications.

Future plans:
– Further testing of verification/visualization strategies during the Spring Forecasting Experiment.
– Testing of 3D MODE when it comes out.
– Verification applications to shorter-length WoF simulations; Dan Dawson's Greensburg case will be the starting point.

Thank you! Questions?

END

NSSL-WRF model climatology – total UH-object path lengths

– Tornado paths vs. UH paths is not an apples-to-apples comparison, but it is an encouraging sign that the general characteristics of the distributions are similar.

Ensemble 3D UH-object visualization – 10 May 2010

– The distribution is bimodal! NSSL-WRF alone would have been misleading.
[Figure labels: cap broke vs. cap did not break]

16 April 2011

24 May 2011

Results using 3-D UH objects...

– For 48 cases during April–May, the total track length of 3D UH objects from SSEF ensemble members for forecast hours 12–30 was computed and compared to total tornado path lengths for corresponding time periods.
– A UH threshold of 100 m² s⁻² worked best: high R² and the most reasonable tornado-to-UH length ratio.
– The biggest problem was that in many cases the SSEF system forecast long UH tracks without many observed tornadoes.
– So, we "filtered" out portions of tracks from simulated storms that were high-based and storms that were elevated.
– "High-based": HLCL > 1500 m.
– "Elevated": CAPE/MUCAPE < 0.75.

[Figure panels: raw UH > 100 m² s⁻²; filtered UH > 100 m² s⁻²; filtered and calibrated UH > 100 m² s⁻²]
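The high-based/elevated filtering described here can be sketched as a simple mask using the stated thresholds (HLCL > 1500 m, CAPE/MUCAPE < 0.75). The function and variable names are illustrative assumptions; the thresholds come from the slide.

```python
import numpy as np

def filter_uh(uh, lcl_height, sbcape, mucape,
              lcl_max=1500.0, cape_ratio_min=0.75):
    """Zero out UH at grid points where the simulated storm is likely
    high-based (LCL height above 1500 m) or elevated (surface-based
    CAPE less than 75% of most-unstable CAPE). All inputs are arrays
    on the same grid.
    """
    uh = np.asarray(uh, dtype=float).copy()
    sb = np.asarray(sbcape, dtype=float)
    mu = np.asarray(mucape, dtype=float)
    # Avoid division by zero where MUCAPE is zero (treated as elevated).
    ratio = np.divide(sb, mu, out=np.zeros_like(sb), where=mu > 0)
    bad = (np.asarray(lcl_height, dtype=float) > lcl_max) | (ratio < cape_ratio_min)
    uh[bad] = 0.0
    return uh
```

Running the mask before the 3D object step is what removes the spurious long tracks from high-based and elevated storms, since those storms rarely produce tornadoes.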

Ensemble CI probabilities and forecasts

[Figure panels: Observed – 1900 UTC; Forecast – 1900 UTC; Observed – 2100 UTC; Forecast – 2100 UTC]
– Storm object timing from the ensemble; timing of storms with UH > 100.
– Forecast hour 2100 UTC (Mondrian).
Slide provided by Jim Correia.