Toward an Effective Short-Range Ensemble Forecast System October 17, 2003 Maj Tony Eckel, USAF University of Washington Atmospheric Sciences Department.

1 Toward an Effective Short-Range Ensemble Forecast System
October 17, 2003
Maj Tony Eckel, USAF
University of Washington Atmospheric Sciences Department

or… Everything You Ever Wanted to Know About SREF but Were Afraid to Ask

This research was supported by the DoD Multidisciplinary University Research Initiative (MURI) program administered by the Office of Naval Research under Grant N00014-01-10745.

2 Forecast Probability from an Ensemble

An ensemble forecast (EF) provides an estimate (histogram) of truth's Probability Density Function (PDF). In a large, well-tuned EF, Forecast Probability (FP) = Observed Relative Frequency (ORF).

In practice, things go awry from…
- Undersampling of the PDF (too few ensemble members)
- Poor representation of initial uncertainty
- Model deficiencies
  -- Model bias causes a shift in the estimated mean
  -- Sharing of model errors between EF members leads to reduced variance

The EF's estimated PDF then does not match truth's PDF, and FP ≠ ORF.

[Figure: frequency histograms of the initial, 24-h, and 48-h forecast states against the true PDF for P(precip > 1.0 inch in 6 hr); for the event threshold shown, FP = 93% while ORF = 72%. Valid: 11 AM, 17 Oct 03]
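The slide's estimate of forecast probability from an ensemble can be sketched as follows: FP for an event is simply the fraction of members in which the event occurs (a minimal sketch assuming NumPy; the function name and sample values are illustrative, not from the talk).

```python
import numpy as np

def forecast_probability(members, threshold):
    """FP for an event such as 'precip > threshold': the fraction of
    ensemble members exceeding the threshold, i.e. the ensemble
    histogram's estimate of truth's PDF integrated past the event
    threshold."""
    members = np.asarray(members, dtype=float)
    return float(np.mean(members > threshold))
```

For example, with five members forecasting 6-hr precip of 0.2, 0.8, 1.2, 1.5, and 2.0 inches, P(precip > 1.0 inch) = 3/5 = 0.6.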

3 Multimodel vs. Perturbed-Model: PME vs. ACME core+ (36-km Domain)

4 Ensemble Dispersion

Verification Rank Histogram: a record of where the verification fell (i.e., its rank) among the ordered ensemble members:
- Flat: well-calibrated EF (truth's PDF matches the EF PDF)
- U-shaped: under-dispersive EF (truth "gets away" quite often)
- Humped: over-dispersive EF

[Figure: verification rank histograms (probability vs. rank) for *PME and *ACME core+. *Bias-corrected MSLP @ 36 h]
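The tally behind a verification rank histogram can be sketched as below (a minimal sketch assuming NumPy; ties between the verification and a member are ignored for simplicity, and the function name is illustrative).

```python
import numpy as np

def rank_histogram(ens, verif):
    """Verification rank histogram: count where the verification falls
    among the ordered ensemble members.

    ens   : array of shape (n_members, n_points), ensemble forecasts
    verif : array of shape (n_points,), verifying values
    Returns counts for ranks 1 .. n_members + 1."""
    n = ens.shape[0]
    # Rank = 1 + number of members below the verification at each point
    ranks = (ens < verif).sum(axis=0) + 1
    # Tally ranks 1 .. n+1 (rank 1 and rank n+1 mean truth escaped the envelope)
    return np.bincount(ranks, minlength=n + 2)[1:]
```

A flat result over many cases indicates a well-calibrated EF; piled-up counts in ranks 1 and n+1 indicate under-dispersion, as described above.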

5 Skill for P(MSLP < 1001 mb)

M : number of probability bins
N : number of data pairs in the bin
FP_i : binned forecast probability {0.0, 0.1, …, 1.0}
ORF_i : observed relative frequency for bin i
SC : sample climatology (total occurrences / total forecasts)

6 How/Where Does Truth Get Away?

Standardized Verification (Vz); Verification Outlier Percentage (VOP): % of |Vz| > 3.

[Figure: *ACME core 36-h MSLP verification rank histogram (MR = 36.6%, MRE = 14.4%); *ACME core+ 12-h forecast Z500 EF mean & Vz, shaded where Vz < −3 or Vz > 3 (VOP = 9.0%, VOP = 3.3%); *ACME core+ 12-h forecast Z500 EF mean & verification extreme ranks, rank 1 and rank 9 (MR = 27.9%, MRE = 5.7%)]

7 Case Study: Initialized at 00Z, 20 Dec 2002

[Figure: 36-h forecast Z500 EF mean and standardized verification Vz, shaded where Vz < −3 or Vz > 3, for *ACME core+ (VOP = 13.8%) and *PME (VOP = 8.0%); Z500 centroid analysis valid 12Z, 21 Dec 2002]

8 Value of Model Diversity for a Mesoscale SREF: ACME core vs. ACME core+ (12-km Domain)

9 Verification Outlier Percentage by Parameter

[Figure: VOP comparison. Synoptic parameters (errors depend on IC uncertainty): (a) Z500 and (b) MSLP, for *PME, *ACME core, and *ACME core+. Surface/mesoscale parameters (errors depend on model uncertainty): (c) WS10 and (d) T2, for *ACME core and *ACME core+. VOP values: 1.8%, 5.0%, 4.2%, 1.6%, 9.0%, 6.7%, 25.6%, 13.3%, 43.7%, 21.0%]

10 Ensemble Dispersion and Statistical Consistency

[Figure: spread vs. MSE of the ensemble mean for WS10 (m²/s²) and T2 (°C²): *ACME core spread, MSE of *ACME core mean, ACME core spread (uncorrected), *ACME core+ spread, MSE of *ACME core+ mean, ACME core+ spread (uncorrected); σc² ≈ 11.4 m²/s² and σc² ≈ 13.2 °C². *Bias corrected]
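The consistency check shown on the slide can be sketched as below: for a statistically consistent EF, the domain-average ensemble variance (spread) should match the MSE of the ensemble mean. This is a minimal sketch assuming NumPy; it omits the analysis-error variance term (the σc² on the slide) and any bias correction.

```python
import numpy as np

def spread_vs_mse(ens, verif):
    """Statistical-consistency check for an ensemble forecast.

    ens   : (n_members, n_points) ensemble forecasts
    verif : (n_points,) verifying analysis
    Returns (mean ensemble variance, MSE of the ensemble mean); for a
    reliable EF the two should be comparable."""
    spread = ens.var(axis=0, ddof=1).mean()          # mean ensemble variance
    mse = np.mean((ens.mean(axis=0) - verif) ** 2)   # MSE of the EF mean
    return float(spread), float(mse)
```

Spread well below the MSE of the mean, as on this slide, is the signature of an under-dispersive ensemble.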

11 Scale Dependence of Ensemble Dispersion

[Figure: WS10 spread (m²/s², uncorrected) for the 12-km ACME core vs. the 36-km ACME core]

12 Skill for P(T2 < 0°C)

[Figure: skill comparison of *ACME core, ACME core, *ACME core+, and ACME core+, with the uncertainty line shown]

13 Success and Failure of ACME: ACME core vs. ACME (36-km and 12-km Domains)

14 [Figure: average RMSE (mb) and average ranking]

15

16 Verification Rank Histograms for…

[Figure: verification rank histograms (ranks 1–18, frequency 0.0–0.4) comparing *ACME core and *ACME.
36-h WS10: *ACME core MR = 52.5%, MRE = 30.3%, VOP = 25.9%; *ACME MR = 44.0%, MRE = 32.9%, VOP = 24.7%; also *ACME core MR = 68.1%, MRE = 45.9%, VOP = 44.4%; *ACME MR = 62.3%, MRE = 51.2%, VOP = 44.2%.
36-h T2: MR = 36.5%, MRE = 14.2%, VOP = 9.1%.
36-h MSLP: MR = 26.1%, MRE = 15.0%, VOP = 8.7%.]

17 [Figure: skill vs. lead time (h) for P(WS10 > 18 kt) and P(MSLP < 1001 mb), comparing *ACME core, ACME core, *ACME, and ACME]

18 Case Study: Initialized at 00Z, 20 Dec 2002 (*ACME core)

[Figure: 36-h forecast Z500 EF mean and Vz, shaded where Vz < −3 or Vz > 3, and Z500 EF mean and verification extreme ranks (rank 1, rank 9). MR = 47.9%, MRE = 25.7%, VOP = 18.5%]

19 Case Study: Initialized at 00Z, 20 Dec 2002 (*ACME)

[Figure: 36-h forecast Z500 EF mean and Vz, shaded where Vz < −3 or Vz > 3, and Z500 EF mean and verification extreme ranks (rank 1, rank 18). MR = 40.7%, MRE = 29.6%, VOP = 19.5%]

20 Case Study: Initialized at 00Z, 20 Dec 2002 (*PME)

[Figure: 36-h forecast Z500 EF mean and Vz, shaded where Vz < −3 or Vz > 3, and Z500 EF mean and verification extreme ranks (rank 1, rank 9). MR = 37.3%, MRE = 15.1%, VOP = 8.0%]

21 Conclusions

1) Bias correction
   a) Particularly important for mesoscale SREF, in which model biases are often large
   b) Reliability is improved by correctly shifting the forecast PDF location
   c) Resolution is improved through increased sharpness of the forecast PDF
   d) Allows for fair and accurate analysis

2) Multianalysis approach
   a) A useful representation of analysis uncertainty that results in a skillful SREF
   b) Analyses are too highly correlated at times and miss key features
   c) Mirroring does produce valid samples of the approximate forecast PDF but cannot correct deficiencies in the original sample

22 Conclusions (continued)

3) Including model diversity in SREF is critical for representation of uncertainty
   a) Increases the spread of an underdispersive EF to improve statistical consistency
   b) Degree of importance depends on parameter, scale, and weather regime
   c) Unequal skill among ensemble members is not problematic
   d) Both reliability and resolution are improved (even with increased spread) through better estimation of the forecast PDF
   e) A PME (multimodel) system is great at capturing large-scale model uncertainty
      -- Does a more thorough job than a perturbed-model system
      -- Slight overdispersion in a small EF may be advantageous
      ** Improve mesoscale SREF by nudging MM5 solutions to PME solutions
   f) Small-scale spread depends on model resolution
      ** Improve mesoscale SREF by increasing model resolution
   g) Lots of room for improvement within MM5

23

24 UW's Ensemble of Ensembles

Name       | # of EF Members | Type | Initial Conditions                     | Forecast Model(s) | Forecast Cycle | Domain
ACME       | 17              | SMMA | 8 ind. analyses, 1 centroid, 8 mirrors | "Standard" MM5    | 00Z            | 36 km, 12 km
ACME core  | 8               | SMMA | independent analyses                   | "Standard" MM5    | 00Z            | 36 km, 12 km
ACME core+ | 8               | PMMA | independent analyses                   | 8 MM5 variations  | 00Z            | 36 km, 12 km
PME        | 8               | MMMA | independent analyses                   | 8 "native" models | 00Z, 12Z       | 36 km large-scale

ACME, ACME core, and ACME core+ are homegrown; the PME is imported.

ACME: Analysis-Centroid Mirroring Ensemble
PME: Poor Man's Ensemble
MM5: PSU/NCAR Mesoscale Modeling System Version 5
SMMA: Single-Model Multi-Analysis
PMMA: Perturbed-Model Multi-Analysis
MMMA: Multi-Model Multi-Analysis

25 "Native" Models/Analyses of the PME (resolution ~ @ 45°N)

Abbrev | Model / Source | Type | Computational Resolution | Distributed Resolution | Objective Analysis
avn  | Global Forecast System (GFS), National Centers for Environmental Prediction | Spectral | T254 / L64 (~55 km) | 1.0° / L14 (~80 km) | SSI 3D-Var
cmcg | Global Environmental Multi-scale (GEM), Canadian Meteorological Centre | Finite Diff. | 0.9° × 0.9° / L28 (~70 km) | 1.25° / L11 (~100 km) | 3D-Var
eta  | Limited-area mesoscale model, National Centers for Environmental Prediction | Finite Diff. | 32 km / L45 | 90 km / L37 | SSI 3D-Var
gasp | Global AnalysiS and Prediction model, Australian Bureau of Meteorology | Spectral | T239 / L29 (~60 km) | 1.0° / L11 (~80 km) | 3D-Var
jma  | Global Spectral Model (GSM), Japan Meteorological Agency | Spectral | T106 / L21 (~135 km) | 1.25° / L13 (~100 km) | OI
ngps | Navy Operational Global Atmos. Pred. System, Fleet Numerical Meteorological & Oceanographic Cntr. | Spectral | T239 / L30 (~60 km) | 1.0° / L14 (~80 km) | OI
tcwb | Global Forecast System, Taiwan Central Weather Bureau | Spectral | T79 / L18 (~180 km) | 1.0° / L11 (~80 km) | OI
ukmo | Unified Model, United Kingdom Meteorological Office | Finite Diff. | 5/6° × 5/9° / L30 (~60 km) | same / L12 | 3D-Var

26 Design of ACME core+

Total possible combinations: 8 × 5 × 2 × 2 × 5 × 3 × 2 × 2 × 2 × 8 × 8 = 1,228,800
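The combination count above is just the product of the option counts for each varied model component (the slide's design table itself is not in this transcript, so the grouping of options is not shown here):

```python
import math

# Options per varied model component, as listed on the slide
options = [8, 5, 2, 2, 5, 3, 2, 2, 2, 8, 8]

# Total distinct ACME core+ model configurations
total = math.prod(options)
print(total)  # 1228800
```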

27 Research Dataset

• Total of 129 48-h forecasts (Oct 31, 2002 – Mar 28, 2003), all initialized at 00Z
  - Missing forecast case days are shaded in the November–March calendar figure
• Parameters:
  - 36-km Domain (151 × 127): Mean Sea Level Pressure (MSLP), 500-mb Geopotential Height (Z500)
  - 12-km Domain (101 × 103): Wind Speed @ 10 m (WS10), Temperature at 2 m (T2)
• Verification:
  - 36-km Domain: centroid analysis (mean of 8 independent analyses, available at 12-h increments)
  - 12-km Domain: ruc20 analysis (NCEP 20-km mesoscale analysis, available at 3-h increments)

28 Probabilistic Forecast Verification

Brier Score (continuous):
  n : number of data pairs
  FP_i : forecast probability {0.0…1.0}
  OBS_i : observation {0.0 = yes, 1.0 = no}
  BS = 1 for perfect forecasts; BS = 0 for worst-case forecasts

Decomposed Brier Score (by discrete bins; reliability, resolution, and uncertainty terms):
  M : number of probability bins (normally 11)
  N : number of data pairs in the bin
  FP*_i : binned forecast probability (0.0, 0.1, …, 1.0 for 11 bins)
  ORF*_i : observed relative frequency for bin i
  SC : sample climatology (total occurrences / total forecasts)

Brier Skill Score:
  BSS = 1 for perfect forecasts; BSS < 0 for forecasts worse than climatology

ADVANTAGES:
  1) No need to know the long-term climatology of the parameter
  2) Can compute the skill score and visualize the BS in a reliability diagram
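A sketch of the binned Brier score decomposition described above, using the standard Murphy convention (obs = 1 when the event occurred, BS = 0 perfect); note the slide's own convention inverts the observation coding so that higher scores are better. Assumes NumPy; function and variable names are illustrative.

```python
import numpy as np

def brier_decomposition(fp, obs, n_bins=11):
    """Murphy decomposition of the Brier score:
        BS = reliability - resolution + uncertainty
    fp  : forecast probabilities in [0, 1]
    obs : observations (1.0 = event occurred, 0.0 = did not)"""
    fp = np.asarray(fp, dtype=float)
    obs = np.asarray(obs, dtype=float)
    n = len(fp)
    bs = np.mean((fp - obs) ** 2)
    sc = obs.mean()                          # sample climatology SC
    # Bin forecasts to the nearest of 0.0, 0.1, ..., 1.0 (11 bins)
    bins = np.round(fp * (n_bins - 1)) / (n_bins - 1)
    rel = res = 0.0
    for b in np.unique(bins):
        mask = bins == b
        nk = mask.sum()                      # N for this bin
        orf = obs[mask].mean()               # ORF* for this bin
        rel += nk * (b - orf) ** 2           # reliability term
        res += nk * (orf - sc) ** 2          # resolution term
    rel /= n
    res /= n
    unc = sc * (1.0 - sc)                    # uncertainty term
    bss = 1.0 - bs / unc if unc > 0 else np.nan  # skill vs. sample climatology
    return bs, rel, res, unc, bss
```

Because the forecasts are binned, the identity BS = rel − res + unc holds exactly, which is what lets the BS be visualized on a reliability diagram as the slide notes.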

29 Reliability Diagram Comparison: P(MSLP < 1001 mb) @ 36 h

[Figure: reliability diagrams for PME and ACME core+, with the sample climatology line and BSS values]

30 How Well Does an EF Capture Truth?

• Missing Rate (MR): the percentage of verifications not "encompassed" by the ensemble members,
    MR = (N_1 + N_{n+1}) / M, or Missing Rate Error: MRE = MR − 2/(n+1)
  N_1 : number of verifications in rank #1 of a VRH
  N_{n+1} : number of verifications in rank #n+1, where n is the number of members
  M : number of verifications

• Standardized Verification (V_z): the deviation of the verification with respect to the standard deviation of the ensemble members,
    V_z = (V − x̄) / s
  x̄ : ensemble mean
  V : verification value
  s : ensemble standard deviation

• Verification Outlier Percentage (VOP): the percentage of verifications not "portrayed" by the ensemble members (i.e., how often the verification is an outlier, |V_z| > 3)
  x̄_m : ensemble mean at point m
  V_m : verification value at point m
  s_m : ensemble standard deviation at point m
  E{VOP} ≈ 0.3% for large n and a normal PDF
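The three capture diagnostics above can be sketched together (a minimal sketch assuming NumPy; the MR formula follows the definitions on the slide, with 2/(n+1) the expected missing rate of a reliable n-member EF, and ties are ignored).

```python
import numpy as np

def capture_stats(ens, verif):
    """Missing Rate, Missing Rate Error, standardized verification,
    and Verification Outlier Percentage for an ensemble forecast.

    ens   : (n_members, n_points) ensemble forecasts
    verif : (n_points,) verifying values"""
    n = ens.shape[0]
    # Rank of the verification among the ordered members (1 .. n+1)
    ranks = (ens < verif).sum(axis=0) + 1
    # MR: fraction of verifications outside the ensemble envelope
    mr = float(np.mean((ranks == 1) | (ranks == n + 1)))
    # MRE: excess over the 2/(n+1) expected for a reliable EF
    mre = mr - 2.0 / (n + 1)
    # Vz: verification deviation in ensemble standard deviations
    vz = (verif - ens.mean(axis=0)) / ens.std(axis=0, ddof=1)
    # VOP: fraction of points where truth is an outlier (|Vz| > 3)
    vop = float(np.mean(np.abs(vz) > 3))
    return mr, mre, vz, vop
```

An MRE near zero and a VOP near 0.3% would indicate an ensemble whose envelope captures truth about as often as a statistically consistent one should.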

