Oct. 12, 2005. National Severe Storms Laboratory & University of Oklahoma. Information briefing to the NEXRAD Technical Advisory Committee.

Presentation transcript:

Quality Control Neural Network
National Severe Storms Laboratory & University of Oklahoma
Information briefing to the NEXRAD Technical Advisory Committee, San Diego, CA

Quality control
Goal: clean up radar reflectivity data contaminated by:
- AP/GC (anomalous propagation / ground clutter) contamination
- Biological targets
- Terrain effects
- Sun-strobes
- Radar interference
- Test patterns
- Returns in clear air

Performance target
Challenge: QC errors compound.
- QC errors degrade the quality of downstream applications, especially algorithms that accumulate data over space/time.
- If a QC algorithm that is correct 99% of the time is used:
  - A national mosaic will have incorrect data somewhere 73% of the time (130 radars; 0.99^130 = 0.27).
  - A three-hour accumulation of precipitation (single radar) will be incorrect at any given range gate 30% of the time (36 frames, assuming 5-minute volume scans; 0.99^36 = 0.70), assuming QC errors are independent from time step to time step.
Performance target:
- Keep 99.9% of good echoes (POD = 0.999).
- 99.9% of the echoes that remain should be good (FAR = 0.001).
- At 99.9% accuracy, the errors in the examples above drop to 12% and 4%.
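The compounding arithmetic on this slide is easy to check directly. A minimal sketch, using the counts quoted above (130 radars; 36 five-minute scans in three hours; independence assumed as stated):

```python
# Probability that n independent per-scan classifications are all correct,
# used to check the compounding figures quoted on the slide.
def p_all_correct(per_scan_accuracy: float, n: int) -> float:
    return per_scan_accuracy ** n

# At 99% per-scan accuracy:
#   national mosaic (130 radars) wrong somewhere: 1 - 0.99**130, about 0.73
#   3-hour accumulation (36 scans) wrong at a gate: 1 - 0.99**36, about 0.30
# At 99.9% accuracy these drop to roughly 0.12 and 0.04.
for acc in (0.99, 0.999):
    print(acc, 1 - p_all_correct(acc, 130), 1 - p_all_correct(acc, 36))
```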

Existing quality control methods
An extensively studied problem:
- Thresholding
- Median filters for speckle removal
- Vertical tilt tests (Fulton et al., 1998)
- Echo-top and texture features (Steiner & Smith, 2002)
- Artifact detection (DQA; Smalley et al., 2004)
- Texture features on all three moments (REC; Kessinger et al., 2003)
Drawbacks of existing methods:
- None of them is designed to be "omnibus".
- Most operate gate by gate, like clutter filters.
- Parameters and thresholds are chosen through experiment; it is hard to reach 99.9% accuracy without rigorous statistical methods.

Quality Control Neural Network
The QCNN approach is novel in four ways:
1. Compute local 2D and 3D features: texture features on all three moments, plus vertical features on the latest ("virtual") volume, so tilts can be cleaned as they arrive while still utilizing vertical features.
2. Identify the crucial/best features ("feature selection").
3. Determine the optimal way to combine the features ("training" the neural network), using the statistical technique of ridge regression for regularization.
4. Identify regions of echo and classify them ("segmentation"), not range gate by range gate; this reduces random errors.
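As an illustration of the "local texture" features in step 1, here is a minimal sketch computing the local mean and variance of a reflectivity field. The window size and field values are assumptions for illustration, not the operational feature definitions:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_mean_variance(field: np.ndarray, size: int = 5):
    """Local mean and variance of `field` over a size x size window."""
    mean = uniform_filter(field, size=size)
    mean_sq = uniform_filter(field ** 2, size=size)
    # var = E[x^2] - E[x]^2, clipped at zero against round-off
    return mean, np.maximum(mean_sq - mean ** 2, 0.0)
```

The intuition: smooth meteorological echo has low local variance (low "texture"), while clutter and interference tend to be noisy, so local variance discriminates between them.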

Human truthing
- Looked at loops
- Examined radar data and other sensors
- Considered terrain, time of day, etc.
- Identified bad echoes

Feature selection
- We considered 60+ candidate features, including texture, image-processing, and tilt-test features suggested by various researchers.
- We experimented with different definitions, e.g.:
  - echo top including only tilts above 1 degree
  - tilt test based on the tilt at 3 km height
  - echo tops based on elevation angle or physical height
- Features were removed one at a time: if the cross-entropy on the validation set remained unchanged, that feature was left out permanently.
- We ended up with 28 features.
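The elimination loop described above can be sketched as a greedy backward pass. Here `train_and_score` is a hypothetical stand-in for retraining the network and returning validation cross-entropy (lower is better), not the actual training code:

```python
# Greedy backward feature elimination: drop each feature in turn and leave
# it out permanently if the validation score does not get worse.
def backward_select(features, train_and_score, tolerance=0.0):
    selected = list(features)
    for f in list(features):
        trial = [g for g in selected if g != f]
        if not trial:
            break
        if train_and_score(trial) <= train_and_score(selected) + tolerance:
            selected = trial  # dropping f did not hurt; leave it out
    return selected
```

With 60+ candidates this requires on the order of one retraining per candidate per pass, which is why a cheap validation score matters.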

Input features (final 28)
- Lowest velocity scan: value, local mean, local variance, difference, minimum variance
- Lowest spectrum-width scan: value
- Lowest reflectivity scan: mean, variance, difference, variance along radials, difference along radials, minimum variance, SPIN, inflections along radial
- Second-lowest reflectivity scan: mean, variance, difference, variance along radials, difference along radials, minimum variance
- Virtual reflectivity volume: vertical maximum, weighted average, vertical difference, echo top, height of maximum, echo size, in-bound distance to 3 km echo top, out-bound distance to zero velocity
There are actually TWO neural networks: one for gates with velocity data, the other for gates with missing or range-folded velocity data.

Pre-processing
- The NN is presented with "hard" and "significant" cases only.
- Pre-classify the "obvious" cases (echo top above 3 km; velocity gates with exactly zero velocity); this reduces the chance of sub-optimal optimization.
- Also remove bad radials and test patterns; these are not local features, and the NN is trained on local features only.
- Mark some range gates as "don't care": range gates near the edges of an echo, where local statistics are poor.
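A minimal sketch of the pre-classification rules quoted above. The array names, the encoding of the three outcomes, and the rule ordering (the zero-velocity test applied last) are illustrative assumptions; only the two thresholds come from the slide:

```python
import numpy as np

GOOD, BAD, UNDECIDED = 1, 0, -1

def preclassify(echo_top_km: np.ndarray, velocity: np.ndarray) -> np.ndarray:
    """Rule-based triage: only UNDECIDED gates go on to the neural network."""
    out = np.full(echo_top_km.shape, UNDECIDED)
    out[echo_top_km > 3.0] = GOOD   # deep echo: obviously precipitation
    out[velocity == 0.0] = BAD      # exactly zero velocity: clutter-like
    return out
```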

Spatial post-processing
- A range-gate-by-range-gate classification is subject to random errors, which can severely degrade data quality.
- Instead, perform quality control on echo regions:
  - Identify echo regions ("blobs") using segmentation (Lakshmanan et al., 2003).
  - Average the NN classification over each blob.
  - A blob stays in or goes out as a whole.
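A minimal sketch of this post-processing step, with SciPy's connected-component labeling standing in for the segmentation of Lakshmanan et al. (2003):

```python
import numpy as np
from scipy.ndimage import label, mean as region_mean

def blob_average(nn_output: np.ndarray, echo_mask: np.ndarray) -> np.ndarray:
    """Replace per-gate NN output with the mean over each contiguous blob."""
    labels, n = label(echo_mask)
    out = np.zeros_like(nn_output)
    if n:
        means = region_mean(nn_output, labels=labels, index=np.arange(1, n + 1))
        for i, m in enumerate(means, start=1):
            out[labels == i] = m
    return out
```

Thresholding the blob-averaged value then keeps or removes each blob as a whole, which is what suppresses isolated gate-level errors.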

Effect of pre- and post-processing
[Figure: panels comparing the original field, the effect of pre-processing ("don't care" regions), NN random error, the effect of post-processing, and the QC'ed field.]

Our REC comparison
- We compared our research algorithm against the operational REC (ORPG Build 8); we could also have used the DQA.
- The operational REC targets only AP, but we compared on all types of QC problems, because our goal was an omnibus QC algorithm.
- Our numbers should therefore not be used to gauge how well the REC performs; such a comparison would use only AP cases.

Targeting a QC application
- QC performance can be improved by targeting: by region, season, reflectivity values, etc.
- Example of seasonal targeting: require blobs to reach at least 0 dBZ in winter or 20 dBZ in summer. Very useful for removing clear-air return.
- Targeting limits the search space for optimization: "If you don't need to be broad, you can be deep."
- Implications of such targeting:
  - Both edited and unedited data streams must be provided.
  - End users can then perform more targeted QC if needed.
- But targeting the type of echo is not a good idea:
  - There should not be separate algorithms for AP, insects, artifacts, etc.
  - What users want is a single "edited" data stream; with separate algorithms, users would fall back on rules of thumb to combine the outputs.
  - The QC algorithm should take care of combining the evidence in an optimal way.
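The seasonal rule quoted above can be sketched in a few lines. The string encoding of the season is an assumption for illustration; only the two thresholds come from the slide:

```python
def passes_seasonal_threshold(blob_max_dbz: float, season: str) -> bool:
    """Seasonal targeting: a blob must reach 0 dBZ in winter, 20 dBZ otherwise."""
    threshold = 0.0 if season == "winter" else 20.0
    return blob_max_dbz >= threshold
```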

How to assess skill?
- Use other radars/sensors:
  - Dual-pol hydrometeor classifier: Kessinger et al. (REC)
  - Rain gauges: Robinson et al. (2001)
  - Limited by the bias, coverage, and availability of the other sensor.
- We scored against human-edited data, on independent test cases.

Test case: AP + precipitation
- Near-perfect correlation of the QCNN field with human truthing.
- The REC is mostly correct, but shows random errors within precipitation and does not remove all non-precipitation.
[Figure: original field, REC field, QCNN field.]

Hardware test pattern
- Perfect correlation of the QCNN with human truthing.
- The REC was not designed for this case.
[Figure: original field, QCNN field, REC field.]

Extreme biological contamination
- The QCNN had never encountered this kind of data during training.
- The REC was not designed for this case.
- Multi-sensor QC is useful here (e.g. satellite/surface observations); we don't have statistics yet.
[Figure: original field, REC field, QCNN field.]

Impact of QCNN on bloom
- Bloom is a common problem, not addressed by any operational QC algorithm.
- Clear-air echoes have meteorological value, but are a nuisance for many downstream automated applications:
  - McGrath et al. (2002) found that most MDA false alarms occur in clear-air situations.
  - Mazur et al. (2003) found that, using the QCNN, 92% of MDA false alarms in clear air could be removed without impacting the probability of detection.
  - Stationary echoes degrade motion estimates.
- Another reason to provide both edited and unedited data streams.
[Figure (left): echoes removed by the QCNN.]

Performance assessment
Radar-only QCNN with no seasonal targeting. Scores:
- POD: probability of detection of "good" echo (fraction of good echo retained)
- FAR: fraction of echoes in the final product that are "bad"
- CSI: critical success index
- HSS: Heidke skill score
Results:
- Effect on severe weather algorithms: 99.9 to 100%
- Effect on precipitation algorithms: 99.9 to 100%
- Visual quality: 95 to 97%
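The four scores listed above follow from a standard 2x2 contingency table; a minimal sketch (the table entries here are hits = good echo kept, misses = good echo removed, false alarms = bad echo kept, correct negatives = bad echo removed):

```python
def skill_scores(hits, misses, false_alarms, correct_negatives):
    """POD, FAR, CSI, and Heidke skill score from a 2x2 contingency table."""
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    csi = hits / (hits + misses + false_alarms)
    n = hits + misses + false_alarms + correct_negatives
    # Expected number correct by chance, given the marginal totals:
    expected = ((hits + misses) * (hits + false_alarms)
                + (correct_negatives + misses) * (correct_negatives + false_alarms)) / n
    hss = (hits + correct_negatives - expected) / (n - expected)
    return pod, far, csi, hss
```

For example, 90 hits, 10 misses, 5 false alarms, and 95 correct negatives give POD = 0.9 and HSS = 0.85.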

Summary
- QC errors compound for algorithms that accumulate in space/time, so the QC algorithm has to be near-perfect.
- The QCNN approach is "evidence-based":
  - A timely (virtual-volume) method for computing features
  - A formal method of selecting features
  - An optimization procedure (ridge regression) to combine these features
  - Classification of regions of echo, not range gate by range gate
- QCNN is an omnibus algorithm:
  - Designed to handle AP/GC, radar test patterns, interference, sun strobes, and clear-air return
  - In shallow precipitation, strong convection, and snow from all over the CONUS
  - New training cases (both good and bad echoes) are constantly being added
- Better performance is possible if the algorithm is targeted or if multi-sensor data are used.
- The approach can be adapted to the ORDA:
  - Collect enough training cases for both good and bad data
  - Possibly identify new features for new problems observed
- Technical details of the algorithm: paper submitted to J. Applied Meteorology.

Supplementary slides: Quality Control Neural Network

Spatial post-processing (continued)
- Spatial post-processing can itself cause problems: here, AP embedded inside precipitation is not removed because of the spatial post-processing.
- A vector segmentation approach might help here.
[Figure: field with random errors but mostly correct; post-processed field.]

Radar-only QC
[Figure: panels showing strong convection, shallow precipitation, and bad data (bloom).]

Cloud cover (T_surface - T_IR)
[Figure: panels showing cold cloud tops, shallow clouds not seen on satellite, and no clouds.]

Multi-sensor QC
[Figure: regions of good data wrongly removed, good data correctly retained, and bad data correctly removed.]