Verification and calibration of probabilistic precipitation forecasts derived from neighborhood and object based methods for a convection-allowing ensemble.

Verification and calibration of probabilistic precipitation forecasts derived from neighborhood and object based methods for a convection-allowing ensemble
Aaron Johnson and Xuguang Wang
School of Meteorology and Center for Analysis and Prediction of Storms, University of Oklahoma, Norman, OK
Acknowledgement: F. Kong, M. Xue, K. Thomas, K. Brewster, Y. Wang, J. Gao
Warn-on-Forecast and High Impact Weather Workshop, 9 February 2012

Outline
– Motivation and convection-allowing ensemble overview
– Non-traditional methods of generating probabilistic forecasts
– Calibration methods
– Results
  – Neighborhood based: full ensemble without calibration; full ensemble with calibration; sub-ensembles without calibration; sub-ensembles with calibration
  – Object based: full ensemble without calibration; full ensemble with calibration; sub-ensembles without calibration; sub-ensembles with calibration
2

Forecast example
– Hourly accumulated precipitation
– Near-CONUS domain
– Subjective impressions of storm structures
3

Motivation
Numerous calibration studies exist for meso- and global-scale ensembles (e.g., Wang and Bishop 2005, Wilks and Hamill 2007, Sloughter et al. 2007).
– How do different probabilistic forecast calibrations compare at convection-allowing resolution?
Neighborhood methods relax the grid-point sensitivity of high-resolution forecasts (e.g., Ebert 2009), while object based methods retain storm-scale features but are typically applied to deterministic forecasts (e.g., Davis et al. 2006, Gallus 2010).
– How skillful are such non-traditional probabilistic forecasts before and after calibration?
– How can probabilistic forecasts be generated at convection-allowing resolution?
The 2009 CAPS ensemble forecasts for the HWT Spring Experiment clustered according to WRF model dynamics (Johnson et al. 2011).
– Is a multi-model ensemble necessary? Does that conclusion change after calibration?
4

Ensemble configuration
– 20 members initialized at 00 UTC and integrated 30 hours over a near-CONUS domain on 26 days from 29 April through 5 June 2009, on a 4 km grid without convective parameterization.
– 10 members are from WRF-ARW, 8 members from WRF-NMM, and 2 members from ARPS.
– Initial background field from the 00 UTC NCEP NAM analysis.
– Coarser (~35 km) resolution IC/LBC perturbations obtained from NCEP SREF forecasts.
– Assimilation of radar reflectivity and velocity using ARPS 3DVAR and cloud analysis for 17 members.
– Perturbations to microphysics (MP), planetary boundary layer (PBL), shortwave radiation (SW Rad.) and land surface model (LSM) physics schemes.

Member   | IC             | LBC       | Radar | MP       | PBL | SW Rad. | LSM
ARW CN   | CN             | NAMf      | Y     | Thompson | MYJ | Goddard | NOAH
ARW C0   | NAMa           | NAMf      | N     | Thompson | MYJ | Goddard | NOAH
ARW N1   | CN – em N1     | em N1     | Y     | Ferrier  | YSU | Goddard | NOAH
ARW N2   | CN – nmm N1    | nmm N1    | Y     | Thompson | MYJ | Dudhia  | RUC
ARW N3   | CN – etaKF N1  | etaKF N1  | Y     | Thompson | YSU | Dudhia  | NOAH
ARW N4   | CN – etaBMJ N1 | etaBMJ N1 | Y     | WSM6     | MYJ | Goddard | NOAH
ARW P1   | CN + em N1     | em N1     | Y     | WSM6     | MYJ | Dudhia  | NOAH
ARW P2   | CN + nmm N1    | nmm N1    | Y     | WSM6     | YSU | Dudhia  | NOAH
ARW P3   | CN + etaKF N1  | etaKF N1  | Y     | Ferrier  | MYJ | Dudhia  | NOAH
ARW P4   | CN + etaBMJ N1 | etaBMJ N1 | Y     | Thompson | YSU | Goddard | RUC
NMM CN   | CN             | NAMf      | Y     | Ferrier  | MYJ | GFDL    | NOAH
NMM C0   | NAMa           | NAMf      | N     | Ferrier  | MYJ | GFDL    | NOAH
NMM N2   | CN – nmm N1    | nmm N1    | Y     | Ferrier  | YSU | Dudhia  | NOAH
NMM N3   | CN – etaKF N1  | etaKF N1  | Y     | WSM6     | YSU | Dudhia  | NOAH
NMM N4   | CN – etaBMJ N1 | etaBMJ N1 | Y     | WSM6     | MYJ | Dudhia  | RUC
NMM P1   | CN + em N1     | em N1     | Y     | WSM6     | MYJ | GFDL    | RUC
NMM P2   | CN + nmm N1    | nmm N1    | Y     | Thompson | YSU | GFDL    | RUC
NMM P4   | CN + etaBMJ N1 | etaBMJ N1 | Y     | Ferrier  | YSU | Dudhia  | RUC
ARPS CN  | CN             | NAMf      | Y     | Lin      | TKE | 2-layer | NOAH
ARPS C0  | NAMa           | NAMf      | N     | Lin      | TKE | 2-layer | NOAH
5

Methods of Generating Probabilistic Forecasts
Neighborhood based probabilistic forecasts
– Event being forecast: accumulated precipitation exceeding a threshold
– Probability obtained from: percentage of grid points within a search radius (48 km), from all members, that exceed the threshold
Object based probabilistic forecasts
– Event being forecast: an object of interest
– Probability obtained from: percentage of ensemble members in which the forecast object occurs
(Figure 8 from Schwartz et al. 2010)
6
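The slides describe the neighborhood method but give no implementation. Below is a minimal sketch of the neighborhood ensemble probability, assuming a 4 km grid, the 48 km search radius quoted above, and hypothetical array and function names (precip, neighborhood_probability) that are not from the presentation.

```python
import numpy as np
from scipy.ndimage import convolve

def neighborhood_probability(precip, threshold, dx_km=4.0, radius_km=48.0):
    """Fraction of grid points within radius_km, pooled over all ensemble
    members, that exceed the accumulation threshold (same units as precip).

    precip: array of shape (n_members, ny, nx).
    Returns an (ny, nx) array of probabilities in [0, 1]."""
    n_members, ny, nx = precip.shape
    r = int(radius_km / dx_km)
    # circular footprint of the search radius, normalized to a mean filter
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    footprint = (yy**2 + xx**2) <= r**2
    kernel = footprint / footprint.sum()

    exceed = (precip >= threshold).astype(float)  # binary exceedance per member
    prob = np.zeros((ny, nx))
    for m in range(n_members):
        # fraction of neighborhood points exceeding the threshold, per member
        prob += convolve(exceed[m], kernel, mode="nearest")
    return prob / n_members  # average over members
```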

Definition of Objects 7
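The figure that defined the objects on this slide is not reproduced in the transcript. As a rough illustration only, and not necessarily the authors' procedure, precipitation objects are often identified by smoothing the field, applying a threshold, and labeling connected regions (in the spirit of Davis et al. 2006, cited earlier); the threshold and smoothing length below are placeholders.

```python
import numpy as np
from scipy.ndimage import uniform_filter, label

def identify_objects(precip, threshold=6.5, smooth_pts=5):
    """Illustrative object identification: smooth, threshold, label.
    threshold and smooth_pts are placeholder values, not from the slides."""
    smoothed = uniform_filter(precip, size=smooth_pts, mode="nearest")
    mask = smoothed >= threshold
    objects, n_objects = label(mask)  # connected-component labeling
    return objects, n_objects
```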

Calibration Methods
– Reliability diagram method: the forecast probability is replaced with the observed frequency from the training period.
– Schaffer et al. (2011) method: extension of the reliability diagram method that includes more parameters.
– Logistic regression (LR):
  Neighborhood based: x1 = mean of NP_0.25; x2 = standard deviation of NP_0.25
  Object based: x1 = uncalibrated forecast probability; x2 = ln(area)
– Bias adjustment of each member: adjust values so the CDF of the forecasts matches that of the observations (Hamill and Whitaker 2006).
8
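As an illustration of the logistic regression calibration listed above, here is a minimal sketch using scikit-learn. The predictors follow the slide (for the neighborhood case, the mean and standard deviation of NP_0.25), but the array names and training setup are assumptions, not the authors' code; the object based case would use the uncalibrated probability and ln(area) as predictors in the same way.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data (flattened over training days and grid points):
#   x1_train: mean of NP_0.25 across members
#   x2_train: standard deviation of NP_0.25 across members
#   y_train:  1 where the event was observed, 0 otherwise

def fit_lr_calibration(x1_train, x2_train, y_train):
    """Fit p = 1 / (1 + exp(-(b0 + b1*x1 + b2*x2))) to the training events."""
    X = np.column_stack([x1_train, x2_train])
    model = LogisticRegression()
    model.fit(X, y_train)
    return model

def calibrated_probability(model, x1, x2):
    """Apply the fitted regression to new predictor fields."""
    X = np.column_stack([np.ravel(x1), np.ravel(x2)])
    return model.predict_proba(X)[:, 1].reshape(np.shape(x1))
```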

Neighborhood Results: Uncalibrated Full Ensemble
– Diurnal cycle of skill for most thresholds
– Skill also depends on threshold and accumulation period
9

Neighborhood Results: Calibrated Full Ensemble
– Skill improvement is limited to the periods of skill minima
– During skill minima, similar improvements from all calibration methods
10

Neighborhood Results: Uncalibrated Sub-Ensembles
– ARW is significantly more skillful than NMM for almost all lead times and thresholds
– The multi-model ensemble is not significantly more skillful than ARW
11

Neighborhood Results: Calibrated Sub-Ensembles
– Differences among the sub-ensembles are reduced.
– The multi-model ensemble only shows advantages at lead times beyond 24 hours.
12

Object Based Results: Full Ensemble
Uncalibrated:
– Skill minimum during the first 6 hours, when members tend to be too similar (i.e., underdispersive)
– Lower skill than the neighborhood based forecasts
– Lower skill for hourly than for 6-hourly accumulations
Calibrated:
– Bias adjustment is the least effective calibration and LR is the most effective.
13

Object Based Results: Sub-Ensembles
Uncalibrated:
– ARW is significantly more skillful than NMM.
– The multi-model ensemble did not show an advantage over ARW.
Calibrated:
– Again, forecasts are more skillful after calibration and more skillful for the longer accumulation period.
– As with the neighborhood probabilistic forecasts, differences in skill among the sub-ensembles are reduced by calibration.
14

Conclusions
– Probabilistic precipitation forecasts from a convection-allowing ensemble for the 2009 NOAA HWT Spring Experiment were verified and calibrated.
– Probabilistic forecasts were derived from both the neighborhood method and a new object based method.
– Several calibrations, including reliability-based, logistic regression, and individual-member bias correction methods, were implemented.
– For both the neighborhood and the object based probabilistic forecasts, calibration significantly improved skill relative to the uncalibrated forecasts during skill minima.
– For the neighborhood probabilistic forecasts, the skill of the different calibrations was similar.
– For the object based probabilistic forecasts, the LR method was most effective.
– Sub-ensembles from ARW and NMM were also verified and calibrated to guide optimal ensemble design:
  – ARW was more skillful than NMM for both neighborhood and object based probabilistic forecasts.
  – The difference in skill was reduced by calibration.
  – The multi-model ensemble of ARW and NMM members only shows an advantage over a single-model ensemble after the 24 hour lead time, for the neighborhood based forecasts.
15

Example of object based method
– Probability of occurrence is forecast for the control forecast objects, A and B; the other panels are forecasts from the other members.
– Forecast probability of A is 1/8 = 12.5%
– Forecast probability of B is 7/8 = 87.5%
(Panel labels: CONTROL FORECAST, A, B)
16
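A minimal sketch of the counting step illustrated above, assuming a hypothetical matches(control_object, member) predicate that decides whether a member forecast contains an object corresponding to the control-forecast object; the predicate itself is not specified in the slides.

```python
def object_probability(control_object, member_forecasts, matches):
    """Fraction of non-control members in which the control-forecast
    object is matched (the object based forecast probability)."""
    n_matched = sum(1 for member in member_forecasts
                    if matches(control_object, member))
    return n_matched / len(member_forecasts)

# With 8 non-control members, an object matched in 7 of them receives
# a forecast probability of 7/8 = 87.5%, as in the example above.
```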

Sensitivity of Neighborhood based calibrations to training length 17

Sensitivity of Object based calibrations to training length 18