Diagnostic Evaluation of Mesoscale Models Chris Davis, Barbara Brown, Randy Bullock and Daran Rife NCAR Boulder, Colorado, USA.

What is Diagnostic Verification? (Quantification of model error and understanding of its sources)
 Cleverly focused applications of standard statistics
 Subjective impressions from many forecasts
 Event- or feature-based evaluation
 Reruns of models with changes based on hypotheses derived from one of the above
The process involves multiple variables and disparate observations and, in general, is iterative and only sometimes conclusive. The more informative the verification results, the more "efficient" the subsequent work will be.

Comparison of Rainfall Forecasts
Hourly rainfall in hundredths of inches. Panels: Stage 2 gauge+radar analysis; NCEP WRF (Δx = 4.5 km); NCAR WRF (Δx = 4.0 km).

Event-, Object-, Entity-, or Feature-based Evaluation
 Time series
 Spatial structure
 Time and space
 Distributions of event attributes without regard to matching (the model's climate)
 Scores and distributions of attributes of matched objects

Traditional "Measures"-Based Approach
Consider forecasts and observations of some dichotomous field on a grid (first letter = forecast, second = observation):

              Obs Yes   Obs No
  Fcst Yes      YY        YN
  Fcst No       NY        NN

Critical Success Index: CSI = YY/(YY+NY+YN)
Equitable Threat Score: ETS = (YY-a)/(YY+NY+YN-a), where a = hits expected by chance

Schematic of five forecast/observed object pairs (F and O): CSI = 0 for the first four (no overlap); CSI > 0 only for the fifth, where F overlaps O. Non-diagnostic and ultra-sensitive to small errors in the simulation of localized phenomena!
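These traditional scores are simple to compute from the contingency counts; a minimal sketch, assuming the convention that the first letter is the forecast and the second the observation (function names are illustrative, not from any verification package):

```python
def csi(yy, yn, ny):
    """Critical Success Index: hits / (hits + misses + false alarms)."""
    return yy / (yy + ny + yn)

def ets(yy, yn, ny, nn):
    """Equitable Threat Score, discounting hits expected by chance,
    a = (YY+YN)(YY+NY)/N."""
    n = yy + yn + ny + nn
    a = (yy + yn) * (yy + ny) / n  # chance hits
    return (yy - a) / (yy + ny + yn - a)
```

A completely displaced forecast has YY = 0, so CSI = 0 regardless of how close the miss was, which is exactly the sensitivity the slide warns about.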

Data, Models and Method
 Study domain: United States, Rocky Mountains (west) to Appalachian Mountains (east)
 Purpose: evaluate two cores of the Weather Research and Forecasting (WRF) model using object-based verification methods
   Advanced Research WRF (ARW), 4-km grid spacing
   Nonhydrostatic Mesoscale Model (NMM), 4.5-km grid spacing
 Time period: 18 April - 4 June 2005; 30-h forecasts initialized at 00 UTC from Eta initial conditions
 Data: hourly accumulated precipitation from NCEP Stage IV on a 4-km grid
 Method: MODE object identification and attribute definition
   Examine statistics of unmatched objects
   Perform merging and matching; compare statistics of matched objects
Kain, J. S., S. J. Weiss, M. E. Baldwin, G. W. Carbin, D. Bright, J. J. Levit, and J. A. Hart, 2005: Evaluating high-resolution configurations of the WRF model that are used to forecast severe convective weather: The 2005 SPC/NSSL Spring Experiment. 17th Conf. on Numerical Weather Prediction, Amer. Meteor. Soc., Paper 2A.5.

Objects in Time
Time series of the east-west (10-m) wind component; MM5, Δx = 3.3 km, WSMR.
 Amplitude, duration, and timing
 Distribution of temporal changes
 Matching objects in time

Diurnal Timing of Wind Changes
[Figure: two panels of zonal wind vs. time, with sunrise and sunset marked.]

Spatial Objects and Their Attributes
Steps (applied to the raw forecast):
 Convolution (disk of radius 5 grid points)
 Thresholding: rainfall > T (1.25 mm/h)
 Compute geometric attributes
 Restore precipitation values inside each object; examine their distribution (box-and-whisker plot)
Attributes:
 Intensity (percentile value)
 Area (number of grid points > T)
 Centroid
 Axis angle (relative to E-W)
 Aspect ratio (W/L)
 Fractional area (A/(W·L), relative to the W × L bounding rectangle)
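The convolution-threshold-attribute steps above can be sketched in a few lines of NumPy/SciPy. This is an illustrative sketch, not the actual MODE code; the radius and threshold follow the slide (disk of 5 grid points, T = 1.25 mm/h), and the attribute names are assumptions:

```python
import numpy as np
from scipy import ndimage

def identify_objects(rain, radius=5, thresh=1.25):
    # 1. Convolve the raw field with a normalized disk kernel
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    disk = (x**2 + y**2 <= radius**2).astype(float)
    disk /= disk.sum()
    smooth = ndimage.convolve(rain, disk, mode="constant")

    # 2. Threshold the smoothed field to define object masks
    mask = smooth > thresh

    # 3. Label connected regions and compute geometric attributes
    labels, nobj = ndimage.label(mask)
    objects = []
    for i in range(1, nobj + 1):
        where = labels == i
        area = int(where.sum())                 # grid points in object
        centroid = ndimage.center_of_mass(where)
        # 4. Restore raw precip values inside the object and
        #    summarize their distribution (here: 90th percentile)
        values = rain[where]
        objects.append({"area": area, "centroid": centroid,
                        "p90": float(np.percentile(values, 90))})
    return objects
```

Axis angle and aspect ratio could be added from the second moments of `where`; they are omitted here to keep the sketch short.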

Time-Space Diagrams
[Figure: object distributions, 22-km EH run, July-August 2001.]

Merging and Matching
Merging of objects in the forecast and observed fields (done separately for each):
 Based entirely on the separation of object centroids: merge when the distance is less than min(400 km, W1 + W2)
 Area, length, and width of a merged object = sum over the objects merged
 Position = weighted average of the merged objects' positions (weighting by area)
Matching of forecast and observed objects:
 Similar criteria to merging, except the threshold is min(200 km, W1 + W2)
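The merging and matching rules above reduce to one distance test plus an area-weighted combination. A minimal sketch, assuming objects are dictionaries with centroid coordinates and widths in km (the dictionary keys and function names are illustrative, not from the MODE source):

```python
import math

def should_merge(obj1, obj2, cap_km=400.0):
    """True when the centroid separation is below min(cap_km, W1 + W2).
    Merging within a field uses cap_km = 400; matching across the
    forecast and observed fields uses the same test with cap_km = 200."""
    (y1, x1), (y2, x2) = obj1["centroid"], obj2["centroid"]
    dist = math.hypot(x2 - x1, y2 - y1)
    return dist < min(cap_km, obj1["width"] + obj2["width"])

def merge(obj1, obj2):
    """Merged area/length/width are sums of the parts; the merged
    position is the area-weighted average of the two centroids."""
    a1, a2 = obj1["area"], obj2["area"]
    total = a1 + a2
    cy = (a1 * obj1["centroid"][0] + a2 * obj2["centroid"][0]) / total
    cx = (a1 * obj1["centroid"][1] + a2 * obj2["centroid"][1]) / total
    return {"area": total,
            "length": obj1["length"] + obj2["length"],
            "width": obj1["width"] + obj2["width"],
            "centroid": (cy, cx)}
```

For example, two objects 60 km apart with widths 50 and 30 km merge (60 < min(400, 80)), while the same pair 500 km apart does not.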

Results from SPC Data, April-June 2005
Attributes:
 Fractional area (top panel): fraction of the minimum bounding rectangle that an object occupies
 Aspect ratio (bottom panel): W/L
Abscissa: object size = square root of object area, expressed both as a number of grid cells and in kilometers.
 Objects are too narrow. Insufficient stratiform precipitation? Response to frontal forcing?

A Closer Look

More Examples

Error Distributions
Both models produce areas that are too large; the NMM has more large errors.

22-km vs. 4-km
[Figure: fractional rainfall bias, R_f/R_o - 1, vs. rain rate for the 4-km and 22-km models.]
 Smaller bias for light rain at 4 km
 Opposite bias for heavy rain

Objects in Three Dimensions (x, y, t): longitude, latitude, and time.

2-D Slices of 3-D Objects
[Figure: slices at Y = 300 (1200 km) and Y = 200 (800 km).]

Probabilistic Forecasts

Conclusions
 Models make rain areas too narrow; lack of stratiform rain?
 Significant positive bias in the size of rain areas in both models, larger for the NMM
 Too much heavy rain; rainfall distributions are too broad
 CSI for matching is lowest in the afternoon, and slightly higher for the ARW
 Not enough moderate (stratiform) rainfall
 Object definitions generalize to 3-D: timing and propagation errors can be assessed, and there are fewer objects to compare