
1 Objective Evaluation of Aviation Related Variables during 2010 Hazardous Weather Testbed (HWT) Spring Experiment
Tara Jensen 1*, Steve Weiss 2, Jason J. Levit 3, Michelle Harrold 1, Lisa Coco 1, Patrick Marsh 4, Adam Clark 4, Fanyou Kong 5, Kevin Thomas 5, Ming Xue 5, Jack Kain 4, Russell Schneider 2, Mike Coniglio 4, and Barbara Brown 1
1 NCAR/Research Applications Laboratory (RAL), Boulder, Colorado
2 NOAA/Storm Prediction Center (SPC), Norman, Oklahoma
3 NOAA/Aviation Weather Center (AWC), Kansas City, Missouri
4 NOAA/National Severe Storms Laboratory (NSSL), Norman, Oklahoma
5 Center for Analysis and Prediction of Storms (CAPS), University of Oklahoma, Norman, Oklahoma

2 NOAA Testbeds
Funded by: NOAA, USWRP, AFWA, NCAR
A bridge between the research and operations communities: community code support, testing and evaluation, verification research
A distributed facility (NOAA/ESRL/GSD and NCAR/RAL/JNT) with 23 staff members at either NOAA/ESRL or NCAR/RAL and 2 staff at NOAA/NCEP

3 HWT-DTC Collaboration Objectives
Supplement HWT Spring Experiment subjective assessments with objective evaluation of the experimental forecasts contributed to the Spring Experiment
Expose forecasters and researchers to both traditional and new approaches for verifying forecasts
Further the DTC mission of testing and evaluation of cutting-edge NWP for R2O

4 2010 Models
CAPS Storm-Scale Ensemble – 4 km (all 26 members plus products)
CAPS deterministic – 1 km
SREF Ensemble Products – 32-35 km
NAM – 12 km
HRRR – 3 km
NSSL – 4 km
MMM – 3 km
NAM high-res window – 4 km
Domains: 2/3 CONUS plus a VORTEX2 daily region of interest (moved daily); observations were NSSL Q2 data

5 General Approach for Objective Evaluation of Contributed Research Models
[Flowchart: models, observations, and regions feed the DTC Model Evaluation Tools (MET), which produce traditional statistics output and spatial (object-oriented) statistics output, delivered via the web]

6 Statistics and Attributes calculated using MET
Traditional (Categorical):
Gilbert Skill Score (GSS, aka ETS)
Critical Success Index (CSI, aka Threat Score)
Frequency Bias
Probability of Detection (POD)
False Alarm Ratio (FAR)
Object-Oriented, from MODE (between matched forecast and observed object pairs):
Centroid Distance
Area Ratio
Angle Difference
Intensity Percentiles
Intersection Area
Boundary Distance
Etc.
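The traditional scores above all derive from a 2x2 contingency table of hits, misses, false alarms, and correct negatives at a given threshold. A minimal Python sketch of those formulas (this is not MET itself, and the example counts are hypothetical):

```python
# A minimal sketch (not MET) of the categorical scores listed above,
# computed from a 2x2 contingency table. Example counts are hypothetical.
def categorical_scores(hits, misses, false_alarms, correct_negatives):
    total = hits + misses + false_alarms + correct_negatives
    pod = hits / (hits + misses)                     # Probability of Detection
    far = false_alarms / (hits + false_alarms)       # False Alarm Ratio
    csi = hits / (hits + misses + false_alarms)      # Critical Success Index / Threat Score
    fbias = (hits + false_alarms) / (hits + misses)  # Frequency Bias
    # GSS (aka ETS): CSI adjusted for the hits expected by random chance.
    hits_random = (hits + misses) * (hits + false_alarms) / total
    gss = (hits - hits_random) / (hits + misses + false_alarms - hits_random)
    return {"POD": pod, "FAR": far, "CSI": csi, "FBIAS": fbias, "GSS": gss}

print(categorical_scores(hits=50, misses=25, false_alarms=75, correct_negatives=850))
```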

7 HWT 2010 Spring Experiment
Aviation: Probability of Convection (echoes > 40 dBZ); Echo Top Height > 25 kft, > 35 kft; REFC at 20, 25, 30, 35, 40, 50, 60 dBZ; RETOP at 25, 30, 35, 40, 45 kft
QPF: Probability of Extreme (0.5 inches in 6 hrs, 1.0 inches in 6 hrs, max accumulation); APCP and probability at 0.5, 1.0, 2.0 inches in 3 h and 6 h
Severe: Probability of Severe (winds, hail, tornadoes)
Evaluation for all three: Traditional and Spatial
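Each threshold in the lists above defines its own contingency table: the forecast and observed grids are converted to yes/no fields and compared point by point. A hedged sketch with synthetic stand-in fields (the gamma-distributed arrays below are purely illustrative, not Q2 or model output):

```python
import numpy as np

# Synthetic stand-ins for a forecast field and the Q2 analysis on a common
# grid; only the thresholding logic is the point of this sketch.
rng = np.random.default_rng(0)
fcst = rng.gamma(2.0, 8.0, size=(200, 200))  # fake composite reflectivity (dBZ)
obs = rng.gamma(2.0, 7.5, size=(200, 200))

for thresh in (20, 25, 30, 35, 40, 50, 60):  # REFC thresholds from the slide
    f, o = fcst >= thresh, obs >= thresh
    hits = int(np.sum(f & o))
    misses = int(np.sum(~f & o))
    false_alarms = int(np.sum(f & ~o))
    # These counts would feed the categorical scores sketched after slide 6.
    pod = hits / max(hits + misses, 1)
    print(f">= {thresh} dBZ: hits={hits} POD={pod:.2f}")
```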

8 Preliminary Results

9 Caveats
25 samples of 00 UTC runs – not quite enough to assign statistical significance
Aggregations represent the median of the 25 samples (17 May – 18 Jun 2010)
Generated using an alpha version of the METviewer database and display system
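In other words, the plotted aggregate for a score is the median over 25 per-case values rather than a score recomputed from pooled counts. A tiny illustration (the per-case GSS values here are made up):

```python
import numpy as np

# Hypothetical per-case scores: one GSS value for each of the 25 00 UTC runs.
gss_by_case = np.random.uniform(0.05, 0.35, size=25)
print("aggregated (median) GSS:", np.median(gss_by_case))
```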

10 Object Definition

11 Use of Attributes of Objects defined by MODE
Centroid Distance: provides a quantitative sense of the spatial displacement of the cloud complex. Small is good.
Axis Angle: provides an objective measure of linear orientation. A small angle difference is good.
Area Ratio = Fcst Area / Obs Area: provides an objective measure of over- or under-prediction of the areal extent of cloud. Close to 1 is good.
[Schematic: forecast and observed fields with the forecast and observed object areas]
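To make these attributes concrete, here is a rough Python sketch computing centroid distance, axis-angle difference, and area ratio for a matched pair of objects given as boolean masks. Real MODE does considerably more (convolution, thresholding, fuzzy matching), so this is only an approximation of the definitions:

```python
import numpy as np

def centroid(mask):
    """Mean (row, col) position of a boolean object mask."""
    ii, jj = np.nonzero(mask)
    return ii.mean(), jj.mean()

def axis_angle(mask):
    """Orientation (degrees) of the object's major axis, via the principal
    component of its grid-point coordinates."""
    pts = np.column_stack(np.nonzero(mask)).astype(float)
    pts -= pts.mean(axis=0)
    cov = pts.T @ pts / len(pts)
    evals, evecs = np.linalg.eigh(cov)
    major = evecs[:, np.argmax(evals)]
    return np.degrees(np.arctan2(major[1], major[0]))

def pair_attributes(fcst_mask, obs_mask):
    (fi, fj), (oi, oj) = centroid(fcst_mask), centroid(obs_mask)
    return {
        "centroid_distance": np.hypot(fi - oi, fj - oj),  # in grid squares
        "angle_difference": abs(axis_angle(fcst_mask) - axis_angle(obs_mask)),
        "area_ratio": fcst_mask.sum() / obs_mask.sum(),   # Fcst Area / Obs Area
    }
```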

12 Use of Attributes of Objects defined by MODE (cont.)
Symmetric Difference (the non-intersecting area): may be a good summary statistic for how well the forecast and observed objects match. Small is good.
P50/P90 Intensity: provides objective measures of the median (50th percentile) and near-peak (90th percentile) intensities found in the objects. A ratio close to 1 is good.
Total Interest: summary statistic derived from a fuzzy logic engine with user-defined interest maps for all these attributes plus some others. Close to 1 is good.
[Schematic example: Fcst P50 = 29.0, P90 = 33.4; Obs P50 = 26.6, P90 = 31.5; Total Interest = 0.75]
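Continuing the sketch from the previous slide: symmetric difference and the P50 ratio are direct to compute, and Total Interest can be illustrated as a weighted fuzzy combination. The interest maps and weights below are invented placeholders (MODE's actual maps live in its configuration file), and the masks carry over from the prior sketch:

```python
import numpy as np

def more_attributes(fcst_mask, obs_mask, fcst_field, obs_field):
    """Symmetric difference (grid squares) and P50 intensity ratio."""
    sym_diff = int(np.sum(fcst_mask ^ obs_mask))  # non-intersecting area
    p50_ratio = (np.percentile(fcst_field[fcst_mask], 50)
                 / np.percentile(obs_field[obs_mask], 50))
    return sym_diff, p50_ratio

def total_interest(centroid_dist, area_ratio):
    # Each attribute is mapped to [0, 1] by an interest map, then combined
    # as a weighted mean. These maps and weights are invented placeholders.
    dist_interest = max(0.0, 1.0 - centroid_dist / 400.0)  # small distance -> high interest
    area_interest = min(area_ratio, 1.0 / area_ratio)      # symmetric about ratio = 1
    weights = {"dist": 2.0, "area": 1.0}
    return ((weights["dist"] * dist_interest + weights["area"] * area_interest)
            / sum(weights.values()))
```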

13 Example: Radar Echo Tops – 1 hr forecast valid 9 June 2010, 01 UTC
[Panels: NSSL Q2 Observed, HRRR, CAPS Mean, CAPS 1 km RETOP; legend: observed objects, matched object 1, matched object 2, unmatched object]

15 Example: Radar Echo Tops – 1 hr forecast valid 9 June 2010, 01 UTC (NSSL Q2 Observed vs. model RETOP)

Attribute           HRRR       CAPS Mean   CAPS 1 km
Centroid Distance   27.06 km   24.56 km    30.52 km
Angle Diff          1.56 deg   5.83 deg    5.87 deg
Area Ratio          1.17       2.77        2.48
Symmetric Diff      1372 gs    2962 gs     2735 gs
P50 Ratio           4.13       4.13        4.13
Total Interest      1.00       0.93        0.94

16 Example: Radar Echo Tops – the ensemble mean is not always so useful
[Panels: RETOP Observed, CAPS Mean, and individual members Thompson, WSM6, WDM6, Morrison]
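A toy illustration of why a point-wise ensemble mean can mislead for a field like RETOP: if members place the same compact storm in slightly different spots, the mean smears it into a larger, weaker footprint. All numbers here are synthetic:

```python
import numpy as np

# Four fake "members" each place an identical 12x12 storm (45 dBZ) at a
# slightly different location; their point-wise mean tiles a 24x24 footprint
# at a quarter of the intensity.
members = np.zeros((4, 100, 100))
for m, (di, dj) in enumerate([(0, 0), (0, 12), (12, 0), (12, 12)]):
    members[m, 40 + di:52 + di, 40 + dj:52 + dj] = 45.0

mean = members.mean(axis=0)
print("member area >= 40 dBZ:  ", int(np.sum(members[0] >= 40)))  # 144 points
print("ensemble-mean area > 0: ", int(np.sum(mean > 0)))          # 576 points
print("ensemble-mean max (dBZ):", mean.max())                     # 11.25
```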

17 Traditional Stats – GSS (aka ETS)
[Plot comparing: CAPS Ensemble Mean, CAPS 1 km model, CAPS SSEF ARW-CN (control w/ radar assimilation), CAPS SSEF ARW-C0 (control w/o radar assimilation), 3 km HRRR, 12 km NAM]

18 Traditional Stats – Freq. Bias
[Plot comparing the same models as slide 17]

19 MODE Attributes – Area Ratio

20 MODE Attributes – Symmetric Diff

21 Summary
30 models and 4 ensemble products were evaluated during HWT 2010. Most models had reflectivity as a variable; 3 models had Radar Echo Top as a variable (HRRR, CAPS Ensemble, CAPS 1 km).
All models appear to over-predict RETOP areal coverage, by at least a factor of 2-5 based on frequency bias and a factor of 5-10 based on MODE area ratio.
Based on some traditional and object-oriented metrics, HRRR appears to have a slight edge over the CAPS simulations for RETOP during the 2010 Spring Experiment, but the differences are not statistically significant.
The ensemble post-processing technique (seen in the Ensemble Mean) seems to inflate the over-prediction of the areal extent of the cloud shield to a non-useful level.
Additional evaluation of the probability of exceeding 40 dBZ is planned for later this winter.

22 Thank You … Questions?
Support for the Developmental Testbed Center (DTC) is provided by NOAA, AFWA, NCAR, and NSF.
Evaluation: http://verif.rap.ucar.edu/hwt/2010
MET: http://www.dtcenter.org/met
Email: jensen@ucar.edu
The DTC would like to thank all of the AWC participants who helped improve our evaluation through their comments and suggestions.

