
1 Similar Day Ensemble Post-Processing as Applied to Wildfire Threat and Ozone Days
Michael Erickson 1, Brian A. Colle 1 and Joseph J. Charney 2
1 School of Marine and Atmospheric Sciences, Stony Brook University, Stony Brook, NY
2 USDA Forest Service, East Lansing, MI

2 Ensemble Forecasting: Many Ensembles To Choose From
Example from 00 UTC 10/24/2010 – 500 hPa 144 hr forecast.
[Figure panels: NCEP GEFS 546 dm contour; CMC GGEM 546 dm contour]
…And there are many other ensembles out there (e.g. ECMWF, NOGAPS, NCEP SREF, UKMET, to name a few).

3 Ensemble Forecasting: A Few Ways to Look at Data
Example from 00 UTC 10/24/2010 – GEFS 144 hr forecast.
[Figure panels: 850 hPa temperature anomaly; probability of precipitation; sea level pressure mean and spread; NCEP SREF mean SLP and spread (1012 and 1028 hPa contours)] Source: http://www.emc.ncep.noaa.gov/gmb/ens/
With so much data out there, can ensemble post-processing be used to benefit the operational forecaster by creating deterministic and probabilistic gridded forecasts for certain types of weather patterns?

4 Caveats
[Figure panels: SREF+SBU 2-m temperature bias > 298 K between 1200–0000 UTC; diurnal mean error – SREF+SBU; bias by member > 298 K]
Using ensemble output directly may be misleading, since ensembles are frequently underdispersed and have large surface model biases. Although ensemble post-processing methods are growing in sophistication, the sensitivity of biases to the synoptic flow pattern is not well known.

5 Questions to Be Addressed
- Do model biases vary with the ambient surface weather conditions (e.g. on days with high fire threat and high ozone)?
- Does applying a similar day approach to ensemble post-processing improve deterministic and probabilistic forecasts?
- Are there any dominant atmospheric flow patterns during these anomalous events that are related to model biases?
- Can similar day ensemble post-processing be used to create simple, helpful and skillful forecasts in operations?

6 Methods and Data
- Analyzed the Stony Brook University (SBU) and NCEP Short Range Ensemble Forecast (SREF) systems for 2-m temperature and 10-m wind speed.
- Automated Surface Observing System (ASOS) stations are used as verifying observations from 2007–2009 over a subset of the Northeast (region of study / verification domain).
00 UTC SBU 13-member ensemble:
- 7 MM5 and 6 WRF members run at 12 km grid spacing within a larger 36 km nest.
- Variety of ICs (GFS, NAM, NOGAPS, CMC) and microphysical, convective and PBL schemes.
21 UTC NCEP SREF 21-member ensemble:
- 10 ETA, 5 RSM, 3 WRF-NMM, and 3 WRF-ARW members.
- ICs perturbed using a breeding technique.

7 Bias Correction Methods
1. Running Mean Bias Correction: Determine the bias over the training period and subtract it from the forecast (Wilson et al. 2007).
2. CDF Bias Correction: Adjust the model CDF to the observed CDF for all forecast values (Hamill and Whitaker 2005), then adjust for elevation and land use.
Wind speed was bias corrected with the CDF method; temperature used the running mean method.
[Figure: CDF bias correction example – CDFs for model and observation]
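The two correction methods can be sketched in a few lines. This is an illustrative toy implementation with made-up training values, not the study's code; the empirical-quantile mapping shown is one simple way to realize a CDF adjustment.

```python
def running_mean_correction(train_fcst, train_obs, fcst):
    """Additive correction: subtract the mean training-period bias
    from the new forecast (Wilson et al. 2007 style); used here for
    2-m temperature."""
    bias = sum(f - o for f, o in zip(train_fcst, train_obs)) / len(train_fcst)
    return fcst - bias

def cdf_correction(train_fcst, train_obs, fcst):
    """Quantile mapping: find the forecast's rank in the model training
    CDF, then read off the observed value at the same rank (Hamill and
    Whitaker 2005 style); used here for 10-m wind speed."""
    model = sorted(train_fcst)
    obs = sorted(train_obs)
    rank = sum(1 for m in model if m < fcst) / len(model)
    idx = min(int(rank * len(obs)), len(obs) - 1)
    return obs[idx]

# Toy training period in which the model runs ~1 K warm:
train_f = [299.0, 301.0, 300.0, 298.0]
train_o = [298.0, 300.0, 299.0, 297.0]
print(running_mean_correction(train_f, train_o, 300.0))  # 299.0
print(cdf_correction(train_f, train_o, 300.0))           # 299.0
```

Both corrections remove the warm bias in this toy case; they differ once the error depends on the forecast value itself, which is why the CDF method suits wind speed.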

8 Exploring Model Bias on Hazardous Weather Days
Explored the impact of the training period on post-processing for high fire threat and ozone days:
1. Sequential Training – used the most recent 14 consecutive days.
2. Conditional Training – used the most recent 14 similar days.
High Fire Threat Classification: Used the Fire Potential Index (FPI) from the Wildland Fire Assessment System (WFAS) between 2007–2009. A fire threat day must have 10% or more of the domain reach an FPI of 50 while the remainder of the domain exceeds 25.
High Ozone Classification: A high ozone day must have 10% of AIRNow stations with an Air Quality Index (AQI) > 60 ppb, while the remainder of the domain is > 30 ppb.
Additional details: Analyzed daytime model output (1200–0000 UTC) for ensembles initialized the day of and the day before the hazardous weather event (i.e. SBU model hours 12–24 and 36–48).
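The difference between the two training windows can be sketched as follows. The dates and the flagged-day list are hypothetical; a real implementation would derive the flags from the FPI/AQI classification above.

```python
from datetime import date, timedelta

def sequential_training(target, n=14):
    """Sequential training: the most recent n consecutive days."""
    return [target - timedelta(days=k) for k in range(1, n + 1)]

def conditional_training(target, similar_days, n=14):
    """Conditional training: the most recent n days classified as
    'similar' (e.g. high fire threat), however far back they fall."""
    past = sorted((d for d in similar_days if d < target), reverse=True)
    return past[:n]

target = date(2008, 4, 25)                       # hypothetical event day
flagged = [date(2008, 4, d) for d in (2, 5, 9, 14, 20, 23)]
print(len(sequential_training(target)))          # 14
print(conditional_training(target, flagged)[0])  # 2008-04-23
```

The conditional window skips dissimilar days entirely, which is what lets it capture regime-dependent biases that a fixed 14-day window averages away.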

9 Bias Correction Methods for Fire Threat Days – Temperature
Sequential bias correction for temperature still leaves an average ensemble mean bias of -0.8 K, which is removed when using conditional bias correction. MAE is also improved for almost every ensemble member and for the ensemble mean.
[Figure panels: ME per model for temperature; MAE per model for temperature]

10 Bias Correction Methods for Fire Threat Days – Temperature > 298 K
Spatially, the negative temperature bias on high fire threat days is found at every station. Conditional bias correction removes the negative temperature bias and reduces the spread of the biases.
[Figure panels: raw warm season; raw fire threat days; seq. bias cor. fire threat days; cond. bias cor. fire threat days]

11 Bias Correction Methods for Fire Threat Days – Wind Speed
High fire threat days have a smaller positive wind speed bias than the warm season average. As with temperature, conditional bias correction removes the bias and improves MAE.
[Figure panels: ME per model for wind speed; MAE per model for wind speed]

12 Bias Correction Methods for High Ozone Days – Wind Speed > 2.6 m/s
Wind speed model biases on high ozone days are also less positive than the warm season average. Spatially, the CDF bias correction removes most of the bias, although not as effectively as the additive bias correction does for temperature.
[Figure panels: raw high ozone days; raw warm season; seq. bias cor. high ozone days; cond. bias cor. high ozone days]

13 Reliability Plots for Conditional Bias Correction – High Ozone Days, Wind Speed
Even after conditional bias correction, reliability plots still reveal a lack of probabilistic skill, indicative of strong underdispersion. Therefore, additional post-processing is necessary.
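A reliability curve of the kind read off these plots just bins forecast probabilities and compares each bin's mean probability with the observed event frequency. A minimal sketch with toy numbers, not the study's data:

```python
def reliability_curve(probs, outcomes, n_bins=5):
    """Bin forecast probabilities and, per bin, pair the mean forecast
    probability with the observed frequency; a calibrated ensemble
    sits on the 1:1 line."""
    bins = [[] for _ in range(n_bins)]
    for p, o in zip(probs, outcomes):
        bins[min(int(p * n_bins), n_bins - 1)].append((p, o))
    curve = []
    for b in bins:
        if b:
            curve.append((sum(p for p, _ in b) / len(b),
                          sum(o for _, o in b) / len(b)))
    return curve

# Toy overconfident case: 90% forecasts that verify only half the time.
print(reliability_curve([0.1, 0.9, 0.9, 0.1], [0, 1, 0, 0], n_bins=2))
```

An underdispersed ensemble produces exactly this signature: high-probability bins whose observed frequency falls well below the forecast probability.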

14 Bayesian Model Averaging (BMA)
Bayesian Model Averaging (BMA; Raftery et al. 2005) calibrates ensemble forecasts by estimating:
- a weight for each ensemble member, and
- the uncertainty associated with each forecast.
10 members were selected from the SBU/SREF system with a training period of 28 days:
- the 5 best SBU members (in terms of MAE) for each PBL scheme, and
- the 5 control SREF members.
The same model hours and training method used in bias correction are also used for BMA. Parameters are estimated using an MCMC method developed by Vrugt et al. (2008).
[Figure panels: sample BMA PDF for wind speed – the BMA-derived distribution; members have varying weights]
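The BMA predictive distribution is a weighted mixture of kernels, one centred on each bias-corrected member. The sketch below uses Gaussian kernels with a single fixed sigma and hand-picked weights purely for illustration; in the study the weights and variance come from training (MCMC, Vrugt et al. 2008), and a wind speed application would typically use a non-negative kernel such as a gamma.

```python
import math

def bma_pdf(y, forecasts, weights, sigma):
    """BMA predictive density at y: a weighted sum of Gaussian kernels
    centred on the member forecasts (Raftery et al. 2005)."""
    norm = sigma * math.sqrt(2.0 * math.pi)
    return sum(w * math.exp(-0.5 * ((y - f) / sigma) ** 2) / norm
               for f, w in zip(forecasts, weights))

def bma_mean(forecasts, weights):
    """Deterministic BMA forecast: the weighted mean of the members."""
    return sum(w * f for f, w in zip(forecasts, weights))

# Three hypothetical members with unequal (illustrative) weights:
fcsts, wts = [3.0, 4.0, 6.0], [0.5, 0.25, 0.25]
print(bma_mean(fcsts, wts))  # 4.0
```

The weighted mean here is what produces the deterministic BMA forecast maps shown later, while thresholding the mixture CDF produces the probabilistic maps.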

15 Reliability Plots for Conditional Bias Correction and BMA – High Ozone Days, Wind Speed
BMA improves the probabilistic results (i.e. the reliability curve lies closer to the 1:1 line) for wind speed on high ozone days. However, it is also important to evaluate ensemble dispersion on average.

16 Rank Histograms of Temperature for Hazardous Weather Days
BMA greatly reduces ensemble underdispersion, but cannot correct any lingering bias that results from using sequential training.
[Figure panels: sequential training – fire threat days; conditional training – fire threat days; sequential training – high ozone days; conditional training – high ozone days]
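A rank histogram simply counts where each observation falls among the sorted ensemble members: flat means well dispersed, U-shaped means underdispersed, and a one-sided slope indicates bias. A minimal sketch with toy values:

```python
def rank_histogram(member_forecasts, observations, n_members):
    """Tally the rank of each observation among the ensemble members;
    there are n_members + 1 possible ranks."""
    counts = [0] * (n_members + 1)
    for members, obs in zip(member_forecasts, observations):
        rank = sum(1 for m in members if m < obs)
        counts[rank] += 1
    return counts

# Toy underdispersed case: observations keep falling outside the ensemble.
print(rank_histogram([[1.0, 2.0, 3.0], [1.0, 2.0, 3.0]], [0.5, 3.5], 3))  # [1, 0, 0, 1]
```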

17 Brier Skill Scores – Conditional and Sequential BMA Referenced Against Sequential Bias Correction – Temperature
Brier Skill Scores (BSS) greater than zero indicate probabilistic benefit over the reference. Since the BSS is usually > 0, BMA improves the ensemble regardless of training. In many cases, conditional BMA performs better than sequential BMA, with statistically significant improvement on high fire threat days.
[Figure panels: fire threat days – temperature; high ozone days – temperature]
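The BSS compares the Brier score of one probabilistic forecast with that of a reference (here, conditional or sequential BMA against sequential bias correction). A minimal sketch with toy numbers:

```python
def brier_score(probs, outcomes):
    """Mean squared error of probability forecasts against 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

def brier_skill_score(bs_forecast, bs_reference):
    """BSS = 1 - BS_fcst / BS_ref; > 0 means the forecast beats the reference."""
    return 1.0 - bs_forecast / bs_reference

bs_bma = brier_score([0.9, 0.1], [1, 0])  # toy "BMA" forecast
bs_ref = brier_score([0.7, 0.3], [1, 0])  # toy bias-correction reference
print(brier_skill_score(bs_bma, bs_ref) > 0)  # True
```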

18 Brier Skill Scores – Conditional and Sequential BMA Referenced Against Sequential Bias Correction – Wind Speed
Unlike temperature, the difference between conditional and sequential BMA is not statistically significant.
[Figure panels: fire threat days – wind speed; high ozone days – wind speed]

19 BMA in Operational Forecasting – High Fire Threat Example
Ensemble mean forecast on April 25th, 2008. BMA can be used to generate a deterministic forecast spatially over the entire region.
[Figure panels: observation; raw mean ensemble forecast; sequential bias corrected mean forecast; conditional bias corrected mean forecast; conditional BMA mean forecast]

20 BMA in Operational Forecasting – High Fire Threat Example
Probability > 287 K on April 25th, 2008. BMA can be used to generate probabilistic forecasts for critical thresholds that are typically more accurate than bias correction or the raw ensemble.
[Figure panels: observation; raw ensemble forecast; sequential bias corrected forecast; conditional bias corrected forecast; conditional BMA forecast]

21 Sensitivity of Ensemble Performance to Member Selection
A key assumption when running BMA was that the 5 SBU and 5 SREF members selected were a good choice.
Warm season – conditional bias correction, no BMA: this assumption is tested by comparing the 10-member SBU/SREF ensemble used previously (B10) to a randomly selected 10-member ensemble (R10), repeated 1000 times.
Hazardous weather days – conditional bias correction and BMA: the sensitivity of BMA performance to member selection is tested by rerunning BMA with two 10-member ensembles:
1. the same 5 SBU and 5 SREF members used earlier (B10-BMA);
2. 5 randomly selected SBU and 5 randomly selected SREF members (R10-BMA).
The benefit of combining SBU + SREF is tested by creating three 5-member ensembles from B10:
1. the 5 best SBU members (B5-SBU-BMA);
2. the 5 best SREF members (B5-SREF-BMA);
3. a random mix of the best SBU and best SREF members, 2.5 of each on average (B5-ALL-BMA).

22 Sensitivity of Ensemble Performance to Member Selection – Warm Season – Bias Correction (No BMA)
The B10 ensemble has lower MAE (i.e. more skill) than R10, but it also has less ensemble spread (i.e. is more underdispersed) for temperature and wind speed. This underdispersion results in B10 having less probabilistic skill than R10 for wind speed.
[Figure panels: MAE, B10 minus R10; spread, B10 minus R10; BSS, B10 referenced against R10 – temperature and wind speed]

23 Brier Skill Scores – Comparison Between the Best Ensemble and Randomly Generated Ensembles – Temperature
B10-BMA performs better probabilistically than R10-BMA. Since this was not the case with B10 and R10, BMA can correct for underdispersion if the ensemble has deterministic skill.
[Figure panels: fire threat days; high ozone days]

24 Brier Skill Scores – Benefits of Combining the SBU and SREF Ensembles – Temperature
The combined SBU and SREF ensemble (B5-ALL-BMA) performs better than the SBU (B5-SBU-BMA) and SREF (B5-SREF-BMA) ensembles separately on high fire threat days. Results for high ozone days are mixed, with no clear benefit or loss from using B5-ALL-BMA.
[Figure panels: fire threat days; high ozone days]

25 Environmental Modes on High Fire Days – 500 hPa Height Anomaly
High fire threat days may be associated with a few consistent large-scale atmospheric flow regimes. To examine this, North American Regional Reanalysis (NARR; 32 km grid spacing) 3-hour composites were gathered on high fire threat days. Atmospheric modes on high fire threat days were captured using Empirical Orthogonal Function (EOF) analysis of 500 hPa height anomalies, and the dominant Principal Components (PCs) were correlated with the 2-m temperature model bias.
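The leading EOF/PC pair can be illustrated without a linear algebra library: power iteration on the spatial covariance matrix of a (time × space) anomaly array recovers the dominant pattern and its time series. The two-point, four-time example below is a toy stand-in, not the NARR computation itself.

```python
def leading_eof(anomalies, iters=100):
    """Leading EOF (spatial pattern) and PC time series of a
    time-by-space anomaly matrix, via power iteration on the
    spatial covariance matrix."""
    nt, ns = len(anomalies), len(anomalies[0])
    cov = [[sum(anomalies[t][i] * anomalies[t][j] for t in range(nt)) / nt
            for j in range(ns)] for i in range(ns)]
    v = [1.0] * ns
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(ns)) for i in range(ns)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    pcs = [sum(anomalies[t][i] * v[i] for i in range(ns)) for t in range(nt)]
    return v, pcs

# Two grid points varying in lockstep -> EOF 1 is the uniform pattern.
eof1, pc1 = leading_eof([[1.0, 1.0], [-1.0, -1.0], [2.0, 2.0], [-2.0, -2.0]])
```

Correlating a PC time series like `pc1` against the daily 2-m temperature bias is then an ordinary correlation over the event days.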

26 Environmental Modes on High Fire Days – 500 hPa Height Anomaly
[Figure panels: 500 hPa height anomaly EOFs 1–4]

27 Conclusions
● High fire threat and high ozone days have cooler 2-m temperature and less positive 10-m wind speed model biases compared to the warm season average.
● Conditional post-processing is better than sequential training at removing biases and calibrating the ensemble for high fire threat and ozone days.
● Results with the similar day approach suggest that analog post-processing could be used to create unbiased and skillful gridded operational forecasts.
● Similar day post-processing could be extended to benefit forecasts of other high impact events (Nor'easters, cold snaps, heat waves, wind energy, etc.).
● However, it is not immediately obvious how to implement a large ensemble of unique members with BMA. Combining the SBU and SREF ensembles and picking skillful members (in terms of MAE) is frequently better than randomly selecting members.
● Preliminary EOF results suggest the presence of 500 hPa flow regimes on high fire threat days that are correlated with the 2-m temperature model biases. Exploring the cause of these biases could lead to improvements in the models' parameterized physics.

