Tara Jensen for DTC Staff (1), Steve Weiss (2), Jack Kain (3), Mike Coniglio (3)
(1) NCAR/RAL and NOAA/GSD, Boulder, Colorado, USA
(2) NOAA/NWS/Storm Prediction Center, Norman, Oklahoma, USA
(3) NOAA/OAR/National Severe Storms Laboratory, Norman, Oklahoma, USA

What is the HWT?
NOAA National Severe Storms Lab (NSSL), NOAA Storm Prediction Center (SPC), and the Cooperative Institute for Mesoscale Meteorological Studies (CIMMS)
BRINGING RESEARCH to FORECAST OPERATIONS
The mutual interests of forecasters from the SPC, researchers from NSSL, and collocated joint research partners from CIMMS inspired the formation of the testbed.

What is the Spring Experiment?
Goal:
- Give forecasters a first-hand look at the latest research concepts and products
- Immerse researchers in the challenges, needs, and constraints of front-line forecasters
Approach:
- Forecast teams gather in Norman each week from late April to early June.
- Each day consists of:
  - Daily briefing
  - Review of the previous day's forecast
  - Selection of the current day's forecast area
  - Forecasters split into two teams to predict the chance of severe weather between 20 UTC and 04 UTC (two forecast periods)

DTC Collaboration with HWT
Spring Experiment – BRINGING RESEARCH to FORECAST OPERATIONS
- 2008: Demonstration and first on-line system
  Goal: Demonstrate use of objective metrics in the Spring Experiment format
- 2009: Expanded evaluation with results in real time
  Goal: Assess the impact of radar assimilation on forecasts

MET Components

Grid-Stat – Traditional Vx
Statistics for dichotomous variables, including:
- Frequency Bias
- Gilbert Skill Score
- Critical Success Index
- PODy
- FAR
(Venn diagram: Misses, Hits, and False alarms defined by the overlap of the Observation and Forecast areas.)
MODE – Spatial Vx
Once objects are identified, traditional stats may be calculated. Properties of the objects may also be calculated, including: Intersection Area, Area Ratio, Centroid Distance, Angle Difference, Percent Coverage, Median of Maximum Interest, and Intensity Quartiles.
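As a quick reference, all of the traditional scores above derive from the 2x2 contingency table of hits, misses, false alarms, and correct negatives. A minimal sketch using the standard textbook formulas (illustrative only, not MET's implementation):

```python
def traditional_scores(hits, misses, false_alarms, correct_negatives):
    """Standard dichotomous verification scores from a 2x2 contingency table."""
    total = hits + misses + false_alarms + correct_negatives
    # Hits expected by random chance, given the marginal totals
    hits_random = (hits + misses) * (hits + false_alarms) / total
    return {
        "frequency_bias": (hits + false_alarms) / (hits + misses),
        "pody": hits / (hits + misses),                       # Probability of Detection (yes)
        "far": false_alarms / (hits + false_alarms),          # False Alarm Ratio
        "csi": hits / (hits + misses + false_alarms),         # Critical Success Index
        "gss": (hits - hits_random) /
               (hits + misses + false_alarms - hits_random),  # Gilbert Skill Score (ETS)
    }

# Example with made-up counts: 40 hits, 20 misses, 25 false alarms, 915 correct negatives
print(traditional_scores(40, 20, 25, 915))
```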

Results

2008 Evaluation
- Fcst Vars: 1-hr accumulated precipitation forecasts
- Models: 2 high-resolution models
  - EMC-WRF 4 km (NMM core)
  - NSSL-WRF 4 km (ARW core)
- Obs: NEXRAD Stage II QPE
- User Interface: Available toward the end of the Experiment
  - Traditional stats aggregated by day, threshold, and lead time
  - Spatial stats (MODE output) available for each day
- DTC Participation: 2 people attended the Experiment for a week

Traditional – Gilbert Skill Score
Results were aggregated over the Spring Experiment period and the median value was calculated.
- 0-12 hours: NSSL shows slightly higher skill.
- Longer lead times: For light precip, EMC exhibits slightly greater skill; for heavier precip, the NSSL model has greater skill.
- Maximum skill: Skill appears to peak between 8-12 hours for lighter precip and 5-6 hours for heavier precip.
Gilbert Skill Score (Equitable Threat Score): Measures the fraction of forecast events that were correctly predicted, adjusted for hits associated with random chance.
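In equation form (the standard definition, given here for reference), with hits H, misses M, false alarms F, and N total grid points:

```latex
\mathrm{GSS} = \frac{H - H_{\text{chance}}}{H + M + F - H_{\text{chance}}},
\qquad
H_{\text{chance}} = \frac{(H + M)(H + F)}{N}
```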

Case Study: 11 June 2008 – MODE Spatial Scores
Fcst: NSSL-ARW f025 1-hr accumulated precipitation
Obs: NEXRAD Stage 2 1-hr precipitation estimate
(Side-by-side Forecast and Observed object panels shown.)

2009 Evaluation
- Fcst Vars: Composite reflectivity; 1-hr accumulated precipitation forecasts
- Models: 3 high-resolution models
  - CAPS CN (SSEF 4-km ensemble member - ARW core - radar assimilation)
  - CAPS C0 (SSEF 4-km ensemble member - ARW core - no radar assimilation)
  - HRRR 3 km (ARW core - radar assimilation)
- Obs: NSSL-NMQ Q2 QPE and composite reflectivity products
- User Interface: Tailored around HWT specifications and displays
  - Traditional and spatial statistics available for individual forecast runs
  - MODE graphical output placed into a multi-panel looped display
- DTC Participation: 1 person on-site each week; provided a short tutorial on MET and how to interpret results

Prototype Database and Display System
System developed for the HWT collaboration:
1. Pulls in forecast and observation files
2. Runs MET (Grid-Stat and MODE) using pre-defined configurations
3. Loads a database with the MET output
4. Generates static graphics for the website
5. Prototype Interactive Evaluation Tool in development
(Flow: Forecast and Obs -> Run MET (Grid-Stat, MODE) -> Database of MET output -> Static Graphics Display / Prototype Interactive Display)
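A rough sketch of what steps 1-4 might look like in practice. The grid_stat and mode executables are MET's actual command-line tools (each takes a forecast file, an observation file, and a config file), but the file names, config names, and database schema below are hypothetical:

```python
import sqlite3
import subprocess
from pathlib import Path

OUT = Path("met_out")
OUT.mkdir(exist_ok=True)

# Steps 1-2: run the MET tools with pre-defined configurations
# (input files and config names here are placeholders)
for tool, config in [("grid_stat", "GridStatConfig_hwt"),
                     ("mode", "MODEConfig_hwt")]:
    subprocess.run([tool, "fcst.grb", "obs.grb", config, "-outdir", str(OUT)],
                   check=True)

# Step 3: load MET's ASCII .stat output into a database (schema is illustrative)
db = sqlite3.connect("hwt_vx.db")
db.execute("CREATE TABLE IF NOT EXISTS stat_lines (fname TEXT, line TEXT)")
for stat_file in OUT.glob("*.stat"):
    with open(stat_file) as f:
        db.executemany("INSERT INTO stat_lines VALUES (?, ?)",
                       ((stat_file.name, line.rstrip()) for line in f))
db.commit()

# Step 4: static graphics for the website would then be generated from these
# rows (e.g., with matplotlib), keyed by model, lead time, and threshold.
```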

14 May 2009, Init: 00 UTC
MODE settings: Radius = 5 grid squares (20 km); Threshold = 30 dBZ
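The radius and threshold above are MODE's convolution settings: the raw field is smoothed with a circular filter of the given radius and then thresholded to define objects. A rough illustration of that convolve-and-threshold idea (assuming numpy/scipy; this is not MODE's actual code):

```python
import numpy as np
from scipy import ndimage

def find_objects(field, radius=5, thresh=30.0):
    """Smooth with a circular mean filter, threshold, and label objects."""
    # Circular footprint of the given radius (in grid squares)
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    footprint = x**2 + y**2 <= radius**2
    smoothed = ndimage.generic_filter(field, np.mean, footprint=footprint)
    # Objects are the connected regions exceeding the threshold
    labels, n = ndimage.label(smoothed >= thresh)
    return labels, n

# Example on a synthetic reflectivity field (dBZ)
refl = np.zeros((100, 100))
refl[40:60, 40:60] = 45.0          # one storm-like object
labels, n = find_objects(refl)
print(n, ndimage.center_of_mass(labels > 0))  # object count and centroid
```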

2009 Preliminary Results from Grid-Stat – Gilbert Skill Score
- F00-F03: Radar assimilation yields clearly improved skill during f00-f03, even though skill decreases over this period.
- F04 and beyond: Skill trends for both models are similar regardless of initialization, suggesting model physics dominates. This is consistent with the idea that it takes 5-6 hours to spin up a model from a cold start.

Summary
- Overall: The objective verification provided by the HWT/DTC collaboration has been a very positive addition to the Spring Experiment process.
- 2008 Preliminary Results: Over 36 hours, there is no "clear winner" between EMC-4km and NSSL-4km; each model appears to excel during different parts of the forecast cycle.
- 2009 Preliminary Results: Radar assimilation appears to improve skill scores in the first few hours; however, it provides diminishing returns after that. The forecast without radar assimilation closes the skill gap between hours 4-6, supporting the subjective evaluation that it takes 4-6 hours for a model to spin up from a cold start.