CI VERIFICATION METHODOLOGY & PRELIMINARY RESULTS


In short:
1. Find observed CI using radar echoes aloft
2. Compare to CI forecasts from UAH and UW
3. Find hits, misses, and false alarms
4. Preliminary results
5. Discussion

1. How observed CI was determined (from radar data aloft)

Observed CI
 For verification purposes, we need a "truth" field
 Independent of the way in which CI is detected
 Not tied to "objects"
 Based on multi-radar reflectivity at the -10°C isotherm
 Reflectivity aloft is associated with graupel formation
 A good indication of convection
 Less contaminated by clutter and biological echoes
 The multi-radar reflectivity is QC'ed, but QC is not perfect

Reflectivity at -10°C on 4/4/2011  Approx. 1 km resolution over CONUS

Classifying CI
 Define convection as: reflectivity at -10°C exceeds 35 dBZ
 New convection: was below 35 dBZ in the previous image
 Images are 5 minutes apart
 Done on a pixel-by-pixel basis
 But allow for growth of ongoing convection (see the sketch below)
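
The pixel-by-pixel rule amounts to differencing two thresholded images. A minimal sketch in Python/NumPy, assuming 2-D reflectivity grids 5 minutes apart (function and variable names are illustrative, not from the source; the growth allowance comes from the alignment and neighborhood steps on the later slides):

```python
import numpy as np

def classify_new_convection(refl_t0, refl_t1, threshold=35.0):
    """Flag pixels that newly reach the reflectivity threshold.

    refl_t0, refl_t1: 2-D arrays of -10C reflectivity (dBZ), 5 min apart.
    """
    above_now = refl_t1 >= threshold      # convection at t1
    above_before = refl_t0 >= threshold   # convection 5 minutes earlier
    return above_now & ~above_before      # new-convection mask
```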

Model verification
 The CI detection algorithm is now running in real time
 It is being used to verify NSSL-WRF model forecasts of CI

Aside: model verification
 The probability of CI within one hour is very similar
 But the time evolution is different

Real time: Image at t0

Real time: Image at t1

Real time: Observed CI

Methodology
 Take the image at t0 and warp it to align it with the image at t1
 Warping is limited to a 5-pixel movement
 Determined by cross-correlation with a smoothness constraint imposed on it
 5 pixels in 5 min  60 km/h maximum movement
 Then, do a neighborhood search
 A pixel above 35 dBZ with no pixel above 35 dBZ within 3 km in the aligned image is "New Convection"
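
A simplified sketch of the alignment and neighborhood search, assuming a single global shift found by brute-force cross-correlation (the actual method solves for a smooth, spatially varying motion field); the wrap-around shift via np.roll and all names are illustrative:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def align_and_classify(refl_t0, refl_t1, max_shift=5, threshold=35.0,
                       isolation_px=3):
    # Search integer shifts up to +/- max_shift pixels; keep the shift
    # that maximizes the correlation between shifted t0 and t1.
    best_shift, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(refl_t0, (dy, dx), axis=(0, 1))
            score = np.sum(shifted * refl_t1)   # assumes no NaNs
            if score > best_score:
                best_score, best_shift = score, (dy, dx)
    aligned = np.roll(refl_t0, best_shift, axis=(0, 1))
    # A t1 pixel is new convection if it is above the threshold and no
    # aligned-t0 pixel within isolation_px of it is above the threshold.
    nearby_max = maximum_filter(aligned, size=2 * isolation_px + 1)
    return (refl_t1 >= threshold) & (nearby_max < threshold)
```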

Example: Image at t0

Example: Image at t1

Example: Image at t0 aligned to t1

Classification

Definition of Observed CI
 Computed CI using 4 different distance thresholds:
 3 km (as described)
 5 km
 15 km
 25 km
 The 15 km threshold means that a new CI pixel would have to be at least 15 km from existing convection to be considered new
 In the HWT, this is what forecasters tended to like
 This is what I will use for scoring

Significant cells?
 One possible problem is that even a single pixel counts as CI
 So, we also tried requiring cells of at least 13 km² (see the sketch below)
 This will be called ObservedCIv2
 It tends to find only significant cells (or cells after they have grown a little)
 We started doing this after some feedback on this point
 Not available for all days
 We can go back and recompute, but it doesn't seem to make much difference to the final scores
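
One way to impose the 13 km² area threshold is to group flagged pixels into contiguous objects and drop the small ones. A hedged sketch using scipy.ndimage, assuming ~1 km² pixels (names are illustrative, not from the source):

```python
import numpy as np
from scipy.ndimage import label

def filter_significant(new_ci_mask, min_area_km2=13.0, pixel_area_km2=1.0):
    labels, _ = label(new_ci_mask)        # contiguous pixels -> objects
    counts = np.bincount(labels.ravel())  # pixels per object (bin 0 = background)
    big = counts * pixel_area_km2 >= min_area_km2
    big[0] = False                        # never keep the background
    return big[labels]                    # mask of pixels in large objects
```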

2. Comparing Observed to Forecast (by finding the distance between centroids)

Computing distance
 Take the ObservedCI, SatCast, and UWCI grid points
 Find contiguous pixels and call them an object
 Find the centroid of each object
 Use storm motion derived from radar echoes and the model 500 mb wind field
 Compute the distance between each ObservedCI centroid and each forecast CI centroid
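
Object identification and centroid extraction map naturally onto connected-component labeling. A minimal sketch using scipy.ndimage (grid-coordinate centroids only; converting to lat/lon and attaching a storm-motion vector is left to the caller):

```python
from scipy.ndimage import label, center_of_mass

def object_centroids(ci_mask):
    labels, nlab = label(ci_mask)   # contiguous pixels -> one object each
    # (row, col) centroid of every labeled object
    return center_of_mass(ci_mask, labels, index=range(1, nlab + 1))
```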

Distance computation
 Distance is computed as follows:
 If the observed CI is outside the time window of the forecast CI (-15 to +45 min), then dist = MAXDIST
 Project the forecast CI to the time of the observed CI, using the storm motion field
 Compute the Euclidean distance in lat-lon degrees
 MAXDIST was set to 100 km
 Pretty generous
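
Putting the time window, projection, and distance together, a hedged sketch (field names, the per-minute motion units, and the ~111 km per degree conversion are assumptions for illustration; the source computes the distance in lat-lon degrees):

```python
import numpy as np

MAXDIST_KM = 100.0  # generous upper bound on what can count as a hit

def ci_distance(obs, fcst, window_min=(-15.0, 45.0), km_per_degree=111.0):
    """obs/fcst: dicts with time 't' (minutes), 'lat', 'lon'; the forecast
    also carries a storm-motion vector in degrees per minute."""
    dt = obs['t'] - fcst['t']
    if not (window_min[0] <= dt <= window_min[1]):
        return MAXDIST_KM                              # outside -15/+45 min
    lat = fcst['lat'] + fcst['v_deg_per_min'] * dt     # project to obs time
    lon = fcst['lon'] + fcst['u_deg_per_min'] * dt
    return km_per_degree * np.hypot(obs['lat'] - lat, obs['lon'] - lon)
```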

3. Scoring (two ways: Hungarian match and neighborhood distance)

Scoring: Hungarian Match
 Create a cost matrix of the distance between each pair (observed CI to forecast CI)
 Find the best association for each centroid to minimize the global sum of distances
 Any associated pair is a hit
 Any unassociated observed CI is a miss
 Any unassociated forecast CI is a false alarm
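
SciPy's linear_sum_assignment implements exactly this optimal one-to-one matching. A sketch of the scoring, with the added (assumed) rule that pairs matched at or beyond MAXDIST count as unassociated:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def hungarian_score(cost, maxdist=100.0):
    """cost: (n_obs, n_fcst) matrix of centroid distances in km."""
    rows, cols = linear_sum_assignment(cost)        # minimizes total distance
    hits = int(np.sum(cost[rows, cols] < maxdist))  # associated pairs
    misses = cost.shape[0] - hits                   # unassociated observed CI
    false_alarms = cost.shape[1] - hits             # unassociated forecast CI
    return hits, misses, false_alarms
```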

Scoring: Neighborhood Match
 Consider each observed CI
 If there is any forecast CI within MAXDIST, then it is a hit
 Otherwise, it is a miss
 Consider each forecast CI
 If there is no observed CI within MAXDIST, then it is a false alarm
 More generous than the Hungarian Match
 Since multiple forecasts can be verified by a single observation
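
The neighborhood match reduces to row and column minima of the same cost matrix. A minimal sketch, assuming a non-empty matrix of observed-by-forecast distances in km:

```python
import numpy as np

def neighborhood_score(cost, maxdist=100.0):
    """cost: (n_obs, n_fcst) matrix of centroid distances in km."""
    hits = int(np.sum(cost.min(axis=1) < maxdist))           # per observed CI
    misses = cost.shape[0] - hits
    false_alarms = int(np.sum(cost.min(axis=0) >= maxdist))  # per forecast CI
    return hits, misses, false_alarms
```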

Summary of numbers that matter
 Observed CI: 35 dBZ; 5-pixel warp in 5 minutes; 15-pixel isolation for new CI
 Significant cells area threshold (ObservedCIv2): 13 km²
 Time window: -15 min to +45 min
 Distance threshold: hits have to be within 100 km

4. Preliminary results (real-time images and daily scores)

Real time
 The ObservedCI, ObservedCIv2, UAH, and UWCI algorithms can be seen at: adar/civer.shtml

Example

Verification dataset
 A dataset of centroids over the Spring Experiment is available at: ftp://ftp.nssl.noaa.gov/users/lakshman/civerification.tgz
 Contains:
 All ObservedCI, SatCast, and UWCI centroids
 ObservedCIv2 from the point when we started creating it
 Results of matching and skill scores by day

Example result for June 10, 2011
 UAH
 UWCI
 These scores are typical

Only significant cells (ObservedCIv2)
 UAH
 UWCI

5. Discussion

Possible reason for low values
 The cirrus mask could be a factor
 Computing scores without taking the mask into account is problematic
 Because the mask is so widespread, most radar-based CI happens under the mask