Object-oriented verification of WRF forecasts from the 2005 SPC/NSSL Spring Program
Mike Baldwin, Purdue University

References
Baldwin et al. (2005): Development of an automated classification procedure for rainfall systems. Mon. Wea. Rev.
Baldwin et al. (2006): Challenges in comparing realistic, high-resolution spatial fields from convective-scale grids. Symposium on the Challenges of Severe Convective Storms, 2006 AMS Annual Meeting.
Baldwin et al. (2003): Development of an events-oriented verification system using data mining and image processing algorithms. AMS 2003 Annual Meeting, 3rd Conf. on Artificial Intelligence.

Types of data
- Gridded fields of precipitation or reflectivity
- GRIB format has been used
- The program expects models and observations to be on the same grid

Basic strategy
- Compare forecast "objects" with observed "objects"
- Objects are described by a set of attributes related to morphology, location, intensity, etc.
- A multivariate "distance" combining all attributes can be defined to measure differences between objects
- "Good" forecasts will have small distances; "bad" forecasts will have large distances
- Errors for specific attributes can be analyzed for pairs of matching forecast/observed objects

Method (see the sketch below)
- Find areas of contiguous precipitation greater than a threshold
- Expand those areas by ~20% and connect objects that are within 20 km of each other
- Characterize objects by location, mean, variance, size, and measures of shape
- Compare every forecast object to every observed object valid at the same time
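A minimal sketch of this identification step in Python with NumPy/SciPy; this is a plausible reconstruction, not the original code, and the threshold, grid spacing, and merge distance are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def identify_objects(precip, threshold=1.0, dx_km=4.0, connect_km=20.0):
    """Label contiguous areas of precip above `threshold`, then merge
    areas that lie within roughly `connect_km` of each other by dilating
    the rain mask before labeling."""
    mask = precip > threshold
    # Dilate by about half the connection distance so two areas separated
    # by less than connect_km end up in one connected component.
    n_cells = max(1, int(round(0.5 * connect_km / dx_km)))
    dilated = ndimage.binary_dilation(mask, iterations=n_cells)
    labels, n_objects = ndimage.label(dilated)
    labels[~mask] = 0  # keep labels only where precip actually exceeds the threshold
    return labels, n_objects

# Example with a random field standing in for a forecast or Stage II grid:
precip = np.random.gamma(shape=0.5, scale=2.0, size=(200, 200))
labels, n_objects = identify_objects(precip, threshold=1.0)
print(n_objects, "objects identified")
```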

Strengths of approach
- Attributes related to rainfall intensity and auto-correlation ellipticity were able to classify precipitation systems by morphology (linear/cellular/stratiform)
- Can define "similar" and "different" objects using as many attributes as deemed important or interesting
- Allows for event-based verification
  - categorical
  - errors for specific classes of precipitation events

Weaknesses of approach
- Not satisfied with the threshold-based object ID procedure (sensitivity to choice of threshold)
- Currently no way to determine the extent of "overlap" between objects
- Does not include temporal evolution; only snapshots are used

Weaknesses, continued…
- Not clear how to "match" forecast and observed objects
  - How far off does a forecast have to be to be considered a false alarm?
- How to weigh different attributes?
  - Is a 250 km spatial distance the same as a 5 mm precipitation distance?
- Do attribute distributions matter?
  - A forecast of heavy rain verified against observed heavy rain is a good forecast, even if the magnitude is off by 50%
  - A 50% error in a light rain forecast is more significant

Example: STAGE II, WRF2 CAPS

1 mm threshold: STAGE II, WRF2 CAPS

5 mm threshold: STAGE II, WRF2 CAPS

Analyze objects > (50 km)² (WRF2 CAPS, STAGE II panels)
Object attributes: Area = 12700 km², mean(ppt) = 6.5, σ(ppt) = 14.5, max corr(25 km) = 0.06, θ(25 km) = 107°, lat = 39.6°N, lon = 95.1°W
Object attributes: Area = 55200 km², mean(ppt) = 8.2, σ(ppt) = 30.6, max corr(25 km) = 0.54, θ(25 km) = 107°, lat = 40.7°N, lon = 97.3°W

Example: STAGE II, WRF2 CAPS
Object attributes: Area = 12700 km², mean(ppt) = 6.5, σ(ppt) = 14.5, max corr(25 km) = 0.06, θ(25 km) = 107°, lat = 39.6°N, lon = 95.1°W
Object attributes: Area = 55200 km², mean(ppt) = 8.2, σ(ppt) = 30.6, max corr(25 km) = 0.54, θ(25 km) = 107°, lat = 40.7°N, lon = 97.3°W

Analyze objects > (50 km)² (WRF2 CAPS, STAGE II panels)
Object attributes: Area = 20000 km², mean(ppt) = 22.6, max corr(25 km) = 0.47, θ(25 km) = 149°, lat = 34.3°N, lon = 101.3°W
Object attributes: Area = 60000 km², mean(ppt) = 9.8, σ(ppt) = 57.7, max corr(25 km) = 0.43, θ(25 km) = 21°, lat = 40.0°N, lon = 99.6°W

Example: STAGE II, WRF2 CAPS
Object attributes: Area = 20000 km², mean(ppt) = 22.6, max corr(25 km) = 0.47, θ(25 km) = 149°, lat = 34.3°N, lon = 101.3°W
Object attributes: Area = 60000 km², mean(ppt) = 9.8, σ(ppt) = 57.7, max corr(25 km) = 0.43, θ(25 km) = 21°, lat = 40.0°N, lon = 99.6°W

New auto-correlation attributes
Replaced the ellipticity of the auto-correlation contours with the 2nd derivative of correlation in the vicinity of the maximum correlation at specific lags (~25, 50, 75 km, every ~10°)
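A hedged sketch of the lagged spatial correlation these attributes are built from (the 2nd-derivative step is omitted; the grid spacing, angle convention, and finite-shift approximation are assumptions, not the original implementation):

```python
import numpy as np

def lagged_correlation(field, lag_km, angle_deg, dx_km=4.0):
    """Pearson correlation of `field` with itself shifted by `lag_km`
    in the direction `angle_deg` (degrees counter-clockwise from the
    grid x-axis), using only the overlapping portion of the two grids."""
    di = int(round(lag_km / dx_km * np.sin(np.deg2rad(angle_deg))))
    dj = int(round(lag_km / dx_km * np.cos(np.deg2rad(angle_deg))))
    ny, nx = field.shape
    # Overlapping portions of the original and shifted fields.
    a = field[max(0, di):ny + min(0, di), max(0, dj):nx + min(0, dj)]
    b = field[max(0, -di):ny + min(0, -di), max(0, -dj):nx + min(0, -dj)]
    if a.size < 2 or a.std() == 0 or b.std() == 0:
        return np.nan
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

# Correlations at ~25, 50, 75 km lags every 10 degrees, as on the slide:
field = np.random.rand(100, 100)
corr = {(lag, ang): lagged_correlation(field, lag, ang)
        for lag in (25, 50, 75) for ang in range(0, 180, 10)}
```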

Example
- Find contiguous regions of reflectivity
- Expand areas by 15%
- Connect regions within 20 km

Example
- Analyze objects > 150 points (~3000 km²)
- Result: 5 objects
- Next, determine attributes for each object

Attributes
Each object is characterized by a vector of attributes, with a wide variety of units, ranges of values, etc.
- Size: area (number of grid boxes)
- Location: lat, lon (degrees)
- Intensity: mean and variance of reflectivity in the object
- Shape: difference between the maximum and minimum auto-correlation at 50, 100, 150 km lags
- Orientation: angle of maximum auto-correlation at 50, 100, 150 km lags
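A short sketch of building part of this attribute vector from a labeled grid (size, location, and intensity only; the lat/lon input arrays and the helper name are assumptions for illustration):

```python
import numpy as np

def object_attributes(field, labels, lats, lons):
    """Return one attribute dict per labeled object (labels 1..N)."""
    attrs = []
    for obj_id in range(1, int(labels.max()) + 1):
        inside = labels == obj_id
        attrs.append({
            "area_gridboxes": int(inside.sum()),     # size
            "lat": float(lats[inside].mean()),       # centroid latitude
            "lon": float(lons[inside].mean()),       # centroid longitude
            "mean": float(field[inside].mean()),     # mean intensity
            "variance": float(field[inside].var()),  # intensity variance
        })
    return attrs
```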

Object comparison
- Each object is represented by a vector of attributes: x = (x_1, x_2, …, x_n)^T
- Similarity/dissimilarity measures quantify the amount of resemblance or distance between two vectors
- Euclidean distance: d(x, y) = sqrt( sum_i (x_i - y_i)² )
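A minimal sketch of that Euclidean distance between two attribute vectors (attributes assumed to be standardized first, as discussed on later slides):

```python
import numpy as np

def object_distance(x, y):
    """Euclidean distance between two standardized attribute vectors."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(np.sqrt(np.sum((x - y) ** 2)))
```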

LEPS
- A distance of 1 equates to the difference between the "largest" and "smallest" object for a particular attribute
- Linear for uniformly distributed attributes (lat, lon, θ)
- Have to be careful with the L1-norm: AC diff = 0.4, F_o = .08 vs. F_o = .47

How to match observed and forecast objects?
- d_ij = 'distance' between F_i and O_j
- For each forecast object, choose the closest observed object; for each observed object, choose the closest forecast object
- Objects might "match" more than once…
- If d_i* > d_T: false alarm; if d_*j > d_T: missed event
(Schematic: forecast objects F_1, F_2 and observed objects O_1, O_2, O_3.)
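A sketch of this matching logic, assuming a precomputed matrix `dist` of shape (number of forecast objects, number of observed objects); the names and the default threshold are illustrative:

```python
import numpy as np

def match_objects(dist, d_T=4.0):
    """Match each forecast object to its closest observed object and vice
    versa; distances beyond d_T become false alarms / missed events."""
    dist = np.asarray(dist, dtype=float)
    matches, false_alarms, missed = [], [], []
    for i in range(dist.shape[0]):          # each forecast object
        j = int(np.argmin(dist[i, :]))      # closest observed object
        if dist[i, j] > d_T:
            false_alarms.append(i)
        else:
            matches.append((i, j))
    for j in range(dist.shape[1]):          # each observed object
        i = int(np.argmin(dist[:, j]))      # closest forecast object
        if dist[i, j] > d_T:
            missed.append(j)
    return matches, false_alarms, missed
```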

Example of object verification
ARW 2 km (CAPS) forecast vs. radar mosaic: the object identification procedure identifies 4 forecast objects and 5 observed objects (Fcst_1, Fcst_2, Obs_1, Obs_2 labeled in the figure)

Distances between objects
- Use d_T = 4 as the threshold
- Match objects, find false alarms and missed events
(Table of distances between forecast objects F_1 to F_4 and observed objects O_34, O_37, O_50, O_77, O_79.)

 =.07  =.08  =.04  =.22  =.04  = -.07 median position errors matching obs object given a forecast object NMM4 ARW2 ARW4

Average distances for matching forecast and observed objects, 1-30 h forecasts, 10 May to 03 June 2004:
- Eta (12 km) = 2.12
- WRF-CAPS = 1.97
- WRF-NCAR = 1.98
- WRF-NMM = 2.02

With a set of matching observations and forecasts
- Nachamkin (2004) compositing ideas
  - errors given a forecast event
  - errors given an observed event
- Distributions of errors for specific attributes
- Use classification to stratify errors by convective mode

Automated rainfall object identification Contiguous regions of measurable rainfall (similar to Ebert and McBride 2000)

Connected component labeling Pure contiguous rainfall areas result in 34 unique “objects” in this example

Expand areas by 15%, connect regions that are within ~20 km Results in 5 objects

Useful characterization Attributes related to rainfall intensity and auto-correlation ellipticity were able to produce groups of stratiform, cellular, linear rainfall systems in cluster analysis experiments (Baldwin et al. 2005)
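As a rough illustration of how such groups could be recovered from the attribute vectors, here is a generic k-means clustering sketch; the actual procedure of Baldwin et al. (2005) may differ, and scikit-learn plus the three-cluster choice are assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans

# X: (n_objects, n_attributes) array of standardized object attributes,
# e.g. intensity and auto-correlation shape/orientation attributes.
X = np.random.rand(100, 6)  # placeholder data for illustration
cluster_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
# Each cluster would then be inspected and interpreted as, e.g.,
# stratiform, cellular, or linear rainfall systems.
```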

New auto-correlation attributes
Replaced the ellipticity of the auto-correlation contours with the 2nd derivative of correlation in the vicinity of the maximum correlation at specific lags (50, 100, 150 km, every 10°)

How to measure "distance" between objects
- How to weigh different attributes?
  - Is a 250 km spatial distance the same as a 5 mm precipitation distance?
- Do attribute distributions matter?
  - Is 55 mm vs. 50 mm the same as 6 mm vs. 1 mm?
- How to standardize attributes? (see the sketch below)
  - X' = (x - min)/(max - min)
  - X' = (x - mean)/σ
  - Linear error in probability space (LEPS)
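A brief sketch of the three standardization options listed above; the LEPS version uses the empirical cumulative probability of each attribute value over all objects (an assumed, simplified form, not necessarily the exact transform used in the study):

```python
import numpy as np

def minmax(x):
    """X' = (x - min) / (max - min)"""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

def zscore(x):
    """X' = (x - mean) / sigma"""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

def leps(x):
    """Empirical cumulative probability of each value (LEPS-style)."""
    x = np.asarray(x, dtype=float)
    ranks = np.argsort(np.argsort(x))
    return (ranks + 0.5) / x.size
```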

Estimate of d_T threshold
- Compute the distance between each observed object and all others valid at the same time
- d_T = 25th percentile = 2.5
- Forecasts have similar distributions
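A small sketch of that estimate, assuming the standardized observed-object attributes are stacked in one array:

```python
import numpy as np

def estimate_dT(obs_attrs, percentile=25.0):
    """obs_attrs: (n_obs, n_attributes) array of standardized attributes
    for all observed objects valid at the same time."""
    obs_attrs = np.asarray(obs_attrs, dtype=float)
    d = []
    for i in range(len(obs_attrs)):
        for j in range(i + 1, len(obs_attrs)):
            d.append(np.sqrt(np.sum((obs_attrs[i] - obs_attrs[j]) ** 2)))
    return float(np.percentile(d, percentile))
```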

Fcst_1 (NCAR WRF 4 km), Fcst_2 vs. Stage II radar ppt (Obs_1, Obs_2): attributes
- Area = 70000 km², mean(dBZ) = 0.97, σ(dBZ) = 1.26, corr(50) = 1.17, corr(100) = 0.99, corr(150) = 0.84, θ = 173°/173°/173°, lat = 40.2°N, lon = 92.5°W
- Area = 70000 km², mean(ppt) = 0.60, σ(ppt) = 0.67, corr(50) = 0.36, corr(100) = 0.52, corr(150) = 0.49, θ = 85°/75°/65°, lat = 44.9°N, lon = 84.5°W
- mean(ppt) = 0.45, σ(ppt) = 0.57, corr(50) = 0.37, corr(100) = 0.54, corr(150) = 0.58, θ = 171°/11°/11°, lat = 39.9°N, lon = 91.2°W
- mean(ppt) = 0.32, σ(ppt) = 0.44, corr(50) = 0.27, corr(100) = 0.42, corr(150) = 0.48, θ = 95°/85°/85°, lat = 47.3°N, lon = 84.7°W