Verification and calibration of probabilistic precipitation forecasts derived from neighborhood and object based methods for a convection-allowing ensemble
Aaron Johnson and Xuguang Wang
School of Meteorology and Center for Analysis and Prediction of Storms, University of Oklahoma, Norman, OK
Acknowledgements: F. Kong, M. Xue, K. Thomas, K. Brewster, Y. Wang, J. Gao
Warn-on-Forecast and High Impact Weather Workshop, 9 February 2012
Outline
Motivation and convection-allowing ensemble overview
Non-traditional methods of generating probabilistic forecasts
Calibration methods
Results
–Neighborhood based: full ensemble without and with calibration; sub-ensembles without and with calibration
–Object based: full ensemble without and with calibration; sub-ensembles without and with calibration
2
Forecast example
Hourly accumulated precipitation over the near-CONUS domain
Subjective impressions of storm structures
3
Motivation
Numerous calibration studies exist for meso- and global-scale ensembles (e.g., Wang and Bishop 2005; Wilks and Hamill 2007; Sloughter et al. 2007)
– How do different probabilistic forecast calibrations compare at convection-allowing resolution?
Neighborhood methods relax the grid-point sensitivity of high resolution forecasts (e.g., Ebert 2009), while object based methods retain storm-scale features but are typically applied to deterministic forecasts (e.g., Davis et al. 2006; Gallus 2010).
– How can probabilistic forecasts be generated at convection-allowing resolution?
– How skillful are such non-traditional probabilistic forecasts before and after calibration?
2009 CAPS ensemble forecasts for the HWT Spring Experiment clustered according to WRF model dynamics (Johnson et al. 2011)
– Is multi-model necessary? Does the conclusion change before and after calibration?
4
20 members initialized at 00 UTC and integrated 30 hours over a near-CONUS domain on 26 days from 29 April through 5 June 2009, on a 4 km grid without cumulus parameterization.
10 members are from WRF-ARW, 8 members from WRF-NMM, and 2 members from ARPS.
Initial background field from the 00 UTC NCEP NAM analysis.
Coarser (~35 km) resolution IC/LBC perturbations obtained from NCEP SREF forecasts.
Assimilation of radar reflectivity and velocity using ARPS 3DVAR and cloud analysis for 17 members (R = Y in the table).
Perturbations to Microphysics (MP), Planetary Boundary Layer (PBL), Shortwave Radiation (SW Rad.) and Land Surface Model (LSM) physics schemes.

Member   IC              LBC        R  MP        PBL  SW Rad.  LSM
ARWCN    CN              NAMf       Y  Thompson  MYJ  Goddard  NOAH
ARWC0    NAMa            NAMf       N  Thompson  MYJ  Goddard  NOAH
ARWN1    CN – em_N1      em_N1      Y  Ferrier   YSU  Goddard  NOAH
ARWN2    CN – nmm_N1     nmm_N1     Y  Thompson  MYJ  Dudhia   RUC
ARWN3    CN – etaKF_N1   etaKF_N1   Y  Thompson  YSU  Dudhia   NOAH
ARWN4    CN – etaBMJ_N1  etaBMJ_N1  Y  WSM6      MYJ  Goddard  NOAH
ARWP1    CN + em_N1      em_N1      Y  WSM6      MYJ  Dudhia   NOAH
ARWP2    CN + nmm_N1     nmm_N1     Y  WSM6      YSU  Dudhia   NOAH
ARWP3    CN + etaKF_N1   etaKF_N1   Y  Ferrier   MYJ  Dudhia   NOAH
ARWP4    CN + etaBMJ_N1  etaBMJ_N1  Y  Thompson  YSU  Goddard  RUC
NMMCN    CN              NAMf       Y  Ferrier   MYJ  GFDL     NOAH
NMMC0    NAMa            NAMf       N  Ferrier   MYJ  GFDL     NOAH
NMMN2    CN – nmm_N1     nmm_N1     Y  Ferrier   YSU  Dudhia   NOAH
NMMN3    CN – etaKF_N1   etaKF_N1   Y  WSM6      YSU  Dudhia   NOAH
NMMN4    CN – etaBMJ_N1  etaBMJ_N1  Y  WSM6      MYJ  Dudhia   RUC
NMMP1    CN + em_N1      em_N1      Y  WSM6      MYJ  GFDL     RUC
NMMP2    CN + nmm_N1     nmm_N1     Y  Thompson  YSU  GFDL     RUC
NMMP4    CN + etaBMJ_N1  etaBMJ_N1  Y  Ferrier   YSU  Dudhia   RUC
ARPSCN   CN              NAMf       Y  Lin       TKE  2-layer  NOAH
ARPSC0   NAMa            NAMf       N  Lin       TKE  2-layer  NOAH
5
Methods of Generating Probabilistic Forecasts
Neighborhood based probabilistic forecasts
–Event being forecast: accumulated precipitation exceeding a threshold
–Probability obtained from: percentage of grid points within a search radius (48 km), pooled over all members, that exceed the threshold
Object based probabilistic forecasts
–Event being forecast: object of interest
–Probability obtained from: percentage of ensemble members in which the forecast object occurs
Figure 8 from Schwartz et al. (2010)
6
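The neighborhood based probability described above can be sketched as follows. This is an illustrative implementation only: the function name and the toy grid are assumptions, and the real computation runs on the 4 km CAPS ensemble fields with a 48 km search radius (12 grid points at 4 km spacing).

```python
import numpy as np

def neighborhood_probability(precip, threshold, radius_pts):
    """Neighborhood ensemble probability (sketch).

    precip: (n_members, ny, nx) array of accumulated precipitation.
    Returns, at each grid point, the fraction of grid points within
    `radius_pts` of that point, pooled over all members, that exceed
    `threshold`.
    """
    n_members, ny, nx = precip.shape
    exceed = precip > threshold
    prob = np.zeros((ny, nx))
    for j in range(ny):
        for i in range(nx):
            # bounding box of the neighborhood, clipped at domain edges
            j0, j1 = max(0, j - radius_pts), min(ny, j + radius_pts + 1)
            i0, i1 = max(0, i - radius_pts), min(nx, i + radius_pts + 1)
            # circular neighborhood: keep only points within the radius
            jj, ii = np.mgrid[j0:j1, i0:i1]
            mask = (jj - j) ** 2 + (ii - i) ** 2 <= radius_pts ** 2
            patch = exceed[:, j0:j1, i0:i1][:, mask]
            prob[j, i] = patch.mean()   # pool over members and neighborhood
    return prob
```

With a two-member toy grid in which one member exceeds the threshold at a single point, the probability at that point is 1 exceedance out of (5 neighborhood points × 2 members) = 0.1, which matches the pooled-fraction definition above.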
Definition of Objects 7
Calibration Methods
Reliability diagram method: forecast probability is replaced with the observed frequency from the training period
Schaffer et al. (2011) method: extension of the reliability diagram method that includes more parameters
Logistic regression (LR):
–Neighborhood based: x1 = mean of NP_0.25 (the neighborhood probability at the 0.25 threshold); x2 = standard deviation of NP_0.25
–Object based: x1 = uncalibrated forecast probability; x2 = ln(area)
Bias adjustment of each member: adjust values so the CDF of forecasts matches that of observations (Hamill and Whitaker 2006)
8
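The logistic regression calibration above can be sketched as below for the neighborhood case, where the predictors are the mean and standard deviation of the neighborhood probability. This is a minimal illustration, not the presentation's implementation: the gradient-ascent fitting loop, function names, and all coefficient values are assumptions.

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, n_iter=5000):
    """Fit P(event) = sigmoid(b0 + b1*x1 + b2*x2) on training cases.

    X: (n, 2) predictors (e.g., mean and std. dev. of NP_0.25),
    y: (n,) observed binary outcomes (event occurred or not).
    Simple gradient ascent on the log-likelihood, for illustration.
    """
    Xb = np.hstack([np.ones((len(X), 1)), X])      # prepend intercept column
    beta = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xb @ beta))       # current predicted probabilities
        beta += lr * Xb.T @ (y - p) / len(y)       # log-likelihood gradient step
    return beta

def calibrated_probability(beta, x1, x2):
    """Apply the fitted regression to new forecast predictors."""
    return 1.0 / (1.0 + np.exp(-(beta[0] + beta[1] * x1 + beta[2] * x2)))
```

In practice the regression is fit on a training period of prior forecast/observation pairs and then applied to the independent verification period; the sensitivity of this step to training length is shown on the later slides.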
Neighborhood Results: Uncalibrated Full Ensemble
Skill shows a diurnal cycle for most thresholds
Skill also depends on threshold and accumulation period
9
Neighborhood Results: Calibrated Full Ensemble Skill improvement limited to the periods of skill minima During skill minima, similar improvements from all calibrations 10
Neighborhood Results: Uncalibrated Sub-Ensembles ARW significantly more skillful than NMM for almost all lead times and thresholds Multi-Model is not significantly more skillful than ARW 11
Neighborhood Results: Calibrated Sub-Ensembles
Differences among the different sub-ensembles are reduced.
Multi-Model only shows advantages at later lead times.
12
Object Based Results: Full Ensemble
Uncalibrated:
–Skill minimum during the first 6 hours, when members tend to be too similar (i.e., underdispersive)
–Lower skill than neighborhood based
–Lower skill for hourly than 6-hourly accumulations
Calibrated:
–Bias adjustment is the least effective and logistic regression (LR) is the most effective.
13
Object Based Results: Sub-Ensembles
Uncalibrated:
–ARW significantly more skillful than NMM; multi-model did not show an advantage compared to ARW
Calibrated:
–Again, more skillful after calibration and more skillful for the longer accumulation period
–As for the neighborhood probabilistic forecasts, differences in skill among sub-ensembles are reduced by calibration
14
Conclusions
Probabilistic precipitation forecasts from a convection-allowing ensemble for the 2009 NOAA HWT Spring Experiment were verified and calibrated.
Probabilistic forecasts were derived from both the neighborhood method and a new object based method.
Various calibrations, including reliability based, logistic regression (LR), and individual member bias correction methods, were implemented.
For both the neighborhood and the object based probabilistic forecasts, calibration significantly improved the skill compared to the uncalibrated forecasts during skill minima.
–For the neighborhood probabilistic forecasts, the skill of the different calibrations was similar.
–For the object based probabilistic forecasts, the LR method was most effective.
Sub-ensembles from ARW and NMM were also verified and calibrated to guide optimal ensemble design.
–ARW was more skillful than NMM for both neighborhood and object based probabilistic forecasts.
–The difference in skill was reduced by calibration.
–A multi-model ensemble of ARW and NMM members only showed advantages over a single-model ensemble after the 24-hour lead time for the neighborhood based forecasts.
15
Example of object based method
Probability of occurrence is forecast for control forecast objects A and B; the other panels are forecasts from the other members.
Forecast probability of A is 1/8 = 12.5%
Forecast probability of B is 7/8 = 87.5%
16
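The counting illustrated above can be sketched as a toy routine. The centroid-distance matching criterion here is an assumption for illustration only; the actual object identification and matching follow the "Definition of Objects" slide.

```python
import math

def object_probability(control_obj, member_objects, match_radius_km):
    """Object based forecast probability (toy sketch).

    control_obj: (x, y) centroid of a control-forecast object.
    member_objects: one list per ensemble member, each holding the
    (x, y) centroids of that member's forecast objects.
    An object "occurs" in a member if any of that member's objects
    lies within match_radius_km of the control object's centroid
    (an assumed matching rule for this illustration).
    Returns the fraction of members in which the object occurs.
    """
    def occurs_in(objs):
        return any(
            math.hypot(x - control_obj[0], y - control_obj[1]) <= match_radius_km
            for (x, y) in objs
        )
    return sum(occurs_in(objs) for objs in member_objects) / len(member_objects)
```

With 8 members in which one contains a match for object A and seven contain a match for object B, this reproduces the probabilities on the slide: 1/8 = 12.5% and 7/8 = 87.5%.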
Sensitivity of Neighborhood based calibrations to training length 17
Sensitivity of Object based calibrations to training length 18