Presentation transcript: On the Challenges of Identifying the “Best” Ensemble Member in Operational Forecasting (David Bright, NOAA/Storm Prediction Center; Paul Nutter, CIMMS/Univ. of Oklahoma)

1 On the Challenges of Identifying the “Best” Ensemble Member in Operational Forecasting
David Bright, NOAA/Storm Prediction Center
Paul Nutter, CIMMS/Univ. of Oklahoma
January 14, 2004
Where America's Climate and Weather Services Begin

2 2003 SPC/NSSL Spring Program
Objectives:
– Advance the science of weather forecasting and the prediction of severe convective weather
– Facilitate discussion and excite collaboration between researchers and forecasters through real-time forecasting and evaluation
– Bring in subject-matter experts for assistance
– Efficient testing and delivery of results to SPC operations
Emphasis:
– Model-predicted convective initiation (< 15 hrs) [40%]
– Explore the SREF system's ability to aid severe convective weather forecasting via the Day 2 Probability Outlook [60%]

3 Model (or Best Member) of the Day
– Can a “best member” be chosen from the ensemble?
– Can some members be eliminated from further consideration once they have deviated too far from reality?
– Is a “return to skill” possible for eliminated members?
– Do the early “best” members continue to verify as best during the remainder of the period?

4 “Return to Skill” in the Lorenz '63 Model
– Trajectories return to nearly the same point, but have taken different paths through phase space (the forecast is “right” for the wrong reason).
– The difference varies by time and variable (and by space, as seen later).
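A minimal twin experiment in the Lorenz '63 system illustrates this non-monotonic error behavior. The sketch below uses the classic parameters (sigma = 10, rho = 28, beta = 8/3); the step size, run length, and perturbation size are illustrative choices, not values from the presentation.

```python
# Twin experiment in Lorenz '63: a slightly perturbed trajectory diverges from
# the control, yet can later pass close to it again ("return to skill"),
# having taken a different path through phase space.
import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def integrate(state, dt=0.01, nsteps=3000):
    """Fixed-step RK4 integration; returns the full trajectory."""
    traj = np.empty((nsteps + 1, 3))
    traj[0] = state
    for i in range(nsteps):
        s = traj[i]
        k1 = lorenz63(s)
        k2 = lorenz63(s + 0.5 * dt * k1)
        k3 = lorenz63(s + 0.5 * dt * k2)
        k4 = lorenz63(s + dt * k3)
        traj[i + 1] = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return traj

truth = integrate(np.array([1.0, 1.0, 1.0]))
member = integrate(np.array([1.0, 1.0, 1.0]) + 0.01)   # small initial perturbation
error = np.linalg.norm(truth - member, axis=1)          # phase-space distance vs. time

# Error growth is not monotonic: after the initial divergence the member can
# swing back close to the truth at a later time.
t = np.arange(error.size) * 0.01
late = slice(1000, None)                                # look after t = 10
print("error at t=10:", round(float(error[1000]), 2))
print("smallest later error:", round(float(error[late].min()), 2),
      "at t =", round(float(t[1000 + error[late].argmin()]), 2))
```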

5 Unique “Best” Members in the Lorenz '63 Model
– In a perfect model, nearly every ensemble member has been considered “best” by the time ensemble skill saturates relative to climatology.
– In a biased model, ensemble skill saturates more quickly, but the growth of unique best members is a bit slower.
(Average scores for 1000 60-member ensembles)
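The "unique best members" diagnostic can be reproduced with a few lines, assuming an error value is available for every member at every forecast time; the array below is synthetic and stands in for those errors.

```python
# Cumulative count of distinct "best" members: given an (n_members x n_times)
# array of errors, how many different members have been the lowest-error
# member at some time up to and including each forecast time?
import numpy as np

rng = np.random.default_rng(0)
errors = rng.random((60, 40))                 # synthetic: 60 members, 40 times

best_each_time = errors.argmin(axis=0)        # index of the best member at each time
unique_so_far = [len(set(best_each_time[:t + 1])) for t in range(errors.shape[1])]
print(unique_so_far)                          # grows toward the ensemble size
```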

6 NCEP SREF Used in the Spring Program
– 15 members: 5 Eta, 5 EtaKF, 5 RSM
– Per model: 1 control and 2 +/- bred initial perturbation pairs
– 63-hour forecasts starting at 09 UTC and 21 UTC
– 48 km grid spacing

7 Spatial Variability of “Best” Members
After ranking ensemble members, the median, maximum, and minimum also show highly mixed contributions.

8 Best Member Statistics, or Loss of Member Skill
Following the best-member ideas of Roulston and Smith (2003):
– Attempted to find a “true best” ensemble member at all forecast hours, and to correlate the F015 ensemble ranking with the F039 ensemble ranking
– Normalized RMSE based on 22 variables:
  PMSL, PWTR, CAPE
  2 meter: T, Td
  10 meter: U, V
  700, 500, 300 hPa: T, r, U, V, Z
– RUC analyses served as “truth” at 0000 and 1200 UTC
– 24 days of August 2003
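A rough sketch of this kind of multi-variable, normalized-RMSE ranking is shown below. The array shapes, the stand-in for the RUC analysis, and the normalization (dividing each variable's RMSE by its across-member mean before averaging over the 22 variables) are assumptions for illustration, not the exact SPC procedure.

```python
# Rank ensemble members by a normalized RMSE combined over several variables.
# Shapes and the normalization choice are illustrative assumptions only.
import numpy as np

def normalized_rmse_rank(forecasts, analysis):
    """forecasts: (n_members, n_vars, ny, nx); analysis: (n_vars, ny, nx)."""
    # RMSE per member and variable over the spatial grid
    rmse = np.sqrt(((forecasts - analysis) ** 2).mean(axis=(2, 3)))
    rmse_norm = rmse / rmse.mean(axis=0)          # normalize each variable
    score = rmse_norm.mean(axis=1)                # combine the 22 variables
    return score.argsort()                        # member indices, best first

rng = np.random.default_rng(1)
analysis = rng.standard_normal((22, 50, 60))                      # synthetic "truth"
forecasts = analysis + 0.3 * rng.standard_normal((15, 22, 50, 60))  # 15 members
print(normalized_rmse_rank(forecasts, analysis))
```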

9 The ensemble mean is nearly always closest to the analyses
Without the ensemble mean, ~3 members are considered best among the 6 that could have been identified during the forecast.
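Why the ensemble mean usually verifies best can be illustrated with synthetic data: averaging members whose errors are largely independent cancels part of the error. The error statistics below are made up purely for demonstration.

```python
# Compare the RMSE of the ensemble mean against every individual member
# for a synthetic field with independent member errors.
import numpy as np

rng = np.random.default_rng(3)
truth = rng.standard_normal((50, 60))                    # stand-in "analysis" field
members = truth + rng.standard_normal((15, 50, 60))      # 15 members, unit errors
ens_mean = members.mean(axis=0)

member_rmse = np.sqrt(((members - truth) ** 2).mean(axis=(1, 2)))
mean_rmse = np.sqrt(((ens_mean - truth) ** 2).mean())

print("best single member RMSE:", round(float(member_rmse.min()), 3))
print("ensemble mean RMSE:     ", round(float(mean_rmse), 3))   # typically smaller
```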

10 Loss of Skill
Can 15-hr verification help predict 39-hr results? (r = 0.28)

11 12-hr rank correlation gradually increases with lead time
– Inclusion of the ensemble mean always improves the result
(Figure legend: “Excludes Mean” vs. “Includes Mean”)

12 Rank correlation decreases with increasing lead time
A particular member should not be isolated as a preferred deterministic forecast.
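One simple way to quantify whether an early ranking predicts a later one is a Spearman rank correlation between member scores at the two lead times; the sketch below uses scipy.stats.spearmanr, and the F015/F039 score arrays are synthetic placeholders.

```python
# Does a member's rank at F015 predict its rank at F039?
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
scores_f015 = rng.random(15)                       # e.g. normalized RMSE at F015
scores_f039 = scores_f015 + 0.5 * rng.random(15)   # partially related scores at F039

rho, _ = spearmanr(scores_f015, scores_f039)
print(f"rank correlation F015 vs F039: {rho:.2f}")  # low values argue against
                                                    # picking a single "best" member
```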

13 Summary
– Skill is not monotonic throughout the forecast.
– Performance measures vary widely by parameter and through space and time.
– The ensemble mean is usually the “best member”.
– Attempts to isolate a single best ensemble member will not yield the best forecast over time.
– Eliminating poorly performing ensemble members early in the forecast degrades the ensemble's collective value later in the forecast.


