2006 NHC Verification Report Interdepartmental Hurricane Conference 5 March 2007 James L. Franklin NHC/TPC.


1 2006 NHC Verification Report Interdepartmental Hurricane Conference 5 March 2007 James L. Franklin NHC/TPC

2 Verification Rules
- The system must be a tropical (or subtropical) cyclone at both the forecast time and the verification time; this includes the depression stage (except as noted).
- Verification results are final (until we change something).
- Special advisories are ignored; regular advisories are verified.
- The skill baseline for track is a revised CLIPER5 (developmental data updated to 1931-2004 [ATL] and 1949-2004 [EPAC]), run post-storm on operational compute data.
- The skill baseline for intensity is the new decay-SHIFOR5 model, run post-storm on operational compute data (OCS5). The minimum D-SHIFOR5 forecast is 15 kt.
- New interpolated version of the GFDL: GHMI. The previous GFDL intensity forecast is lagged 6 h as always, but the offset is not applied at or beyond 30 h; half the offset is applied at 24 h, and the full offset at 6-18 h. ICON now uses GHMI.
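The GHMI offset schedule described above can be written down directly. This is an illustrative sketch, not NHC code; the function name and interface are invented for clarity:

```python
def ghmi_offset_fraction(lead_hr):
    """Fraction of the 6-h-lagged GFDL intensity offset applied at a given
    forecast lead time, following the GHMI rules on this slide:
    full offset at 6-18 h, half the offset at 24 h, none at or beyond 30 h.
    (Hypothetical helper; the operational interpolator differs in detail.)"""
    if 6 <= lead_hr <= 18:
        return 1.0          # full offset at short leads
    if lead_hr == 24:
        return 0.5          # offset tapered to half at 24 h
    return 0.0              # no offset at or beyond 30 h (and at 0 h)
```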

3 Decay-SHIFOR5 Model
- Begin by running the regular SHIFOR5 model.
- Apply the DeMaria module to adjust the intensity of tropical cyclones for decay over land. This includes recent adjustments for less decay over skinny landmasses (estimates the fraction of the circulation over land).
- The algorithm requires a forecast track. For the skill baseline, the CLIPER5 track is used. (OFCI could be used if the intent were to provide guidance.)
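A minimal sketch of the kind of inland-decay adjustment described above, assuming the exponential relaxation form of the published DeMaria-Kaplan decay model with its Atlantic coefficients; the operational module, and its land-fraction handling, differ in detail:

```python
import math

def decay_over_land(v0_kt, hours, land_fraction=1.0,
                    v_background=26.7, alpha=0.095):
    """Illustrative inland-decay adjustment: intensity relaxes exponentially
    toward a weak background value, with the decay rate scaled by the
    fraction of the circulation over land (0 = all water, 1 = all land).
    Coefficients follow DeMaria-Kaplan Atlantic values, an assumption here.
    The 15-kt floor mirrors the minimum D-SHIFOR5 forecast on slide 2."""
    effective_alpha = alpha * land_fraction
    v = v_background + (v0_kt - v_background) * math.exp(-effective_alpha * hours)
    return max(15.0, v)
```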

4 2006 Atlantic Verification

  VT (h)   NT   TRACK (n mi)   INT (kt)
  ------   ---  ------------   --------
    000    241        9.5         2.1
    012    223       29.7         6.5
    024    205       50.8        10.0
    036    187       71.9        12.4
    048    169       97.0*       14.3
    072    132      148.7        18.1
    096    100      205.5        19.6
    120     78      265.3        19.0

Values in green meet or exceed all-time records. * 48-h track error for TS and H only was 96.6 n mi.
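The track errors in the table are mean great-circle distances between forecast and verifying best-track positions. A self-contained sketch of that distance, using the standard haversine formula (names are illustrative, not NHC code):

```python
import math

def track_error_nmi(lat_f, lon_f, lat_v, lon_v):
    """Great-circle distance in nautical miles between a forecast position
    and the verifying position -- the quantity averaged in the table above.
    Uses the haversine formula; 1 n mi = 1 minute of great-circle arc."""
    phi_f, phi_v = math.radians(lat_f), math.radians(lat_v)
    dphi = math.radians(lat_v - lat_f)
    dlmb = math.radians(lon_v - lon_f)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi_f) * math.cos(phi_v) * math.sin(dlmb / 2) ** 2)
    central_angle = 2 * math.asin(math.sqrt(a))     # radians
    return math.degrees(central_angle) * 60.0       # 60 n mi per degree of arc
```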

5 Track Errors by Storm

6

7 2006 vs. 5-Year Mean

8 New 5-Year Mean 55 n mi/day

9 OFCL Error Distributions

10 Errors cut in half since 1990

11 Mixed Bag of Skill

12 2006 Track Guidance (Top Tier)

13 2nd-Tier Early Models

14 2006 Late Models

15 Experimental NASA Model (FV5)

16 Guidance Trends

17 Goerss Corrected Consensus CCON 120-h FSP: 36%; CGUN 120-h FSP: 33%. Small improvements of 1-3%, but benefit lost by 5 days.
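The FSP figures above can be computed as below, assuming FSP means the percentage of verifying cases in which one model's error beats the other's (ties excluded); that interpretation, and the function name, are assumptions for illustration:

```python
def fsp_percent(model_errors, reference_errors):
    """Frequency of superior performance: percentage of cases in which the
    model's error is smaller than the reference's, among cases where the
    two errors differ. Inputs are paired per-case errors (e.g., n mi)."""
    pairs = list(zip(model_errors, reference_errors))
    wins = sum(1 for m, r in pairs if m < r)
    decided = sum(1 for m, r in pairs if m != r)
    return 100.0 * wins / decided if decided else 0.0
```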

18 FSU Superensemble vs Goerss Corrected Consensus

19 FSU Superensemble vs Other Consensus Models

20 2006 vs 5-Year Mean

21 No progress with intensity

22 Skill sinking faster than dry air over the Atlantic

23 Intensity Guidance

24

25 Dynamical Intensity Guidance Finally Surpasses Statistical Guidance

26 Intensity Error Distribution When there are few rapid intensifiers, OFCL forecasts have a substantial high bias. GHMI had larger positive biases but higher skill (i.e., smaller but one-sided errors).

27 FSU Superensemble vs Other Consensus Models

28 2006 East Pacific Verification

  VT (h)   N    TRK (n mi)   INT (kt)
  ------   ---  ----------   --------
    000    379       8.8        1.7
    012    341      30.2        6.8
    024    302      54.5       11.2
    036    264      77.4       14.6
    048    228      99.7       16.1
    072    159     142.3       17.8
    096    107     186.1       19.3
    120     71     227.5       18.3

Values in green represent all-time lows.

29 2006 vs 5-Year Mean

30 Errors cut by 1/3 since 1990

31 OFCL Error Distributions

32 Skill trend noisy but generally upward

33 2006 Track Guidance (1st Tier) Larger separation between the dynamical and consensus models (model errors more random, less systematic).

34 2nd Tier

35 FSU Superensemble vs Other Consensus Models

36 Relative Power of Multi-Model Consensus n_e = 1.65; n_e = 2.4
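The effective number of independent models n_e quantifies how much independent information the consensus members contribute. Under the textbook assumption of equal-variance, independent member errors, averaging n_e models shrinks the expected error by 1/sqrt(n_e); this sketch states that relationship (an assumed idealization, not the slide's derivation):

```python
import math

def consensus_error_reduction(n_effective):
    """Expected error of an n_e-member consensus relative to a single
    member, assuming equal-variance, independent member errors. With the
    slide's values, n_e = 1.65 gives ~0.78 and n_e = 2.4 gives ~0.65,
    i.e., a 22-35% error reduction from averaging."""
    return 1.0 / math.sqrt(n_effective)
```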

37 2006 vs Long-term Mean

38 Same as it ever was…

39 …same as it ever was.

40 2006 Intensity Guidance

41 FSU Superensemble vs Other Consensus Models

42 Summary: Atlantic Basin - Track
- OFCL track errors set records for accuracy from 12-72 h. Mid-range skill appears to be trending upward.
- OFCL track forecasts were better than all the dynamical guidance models, but trailed the consensus models slightly.
- GFDL, GFS, and NOGAPS provided the best dynamical track guidance at various times. UKMET trailed badly. No (early) dynamical model had skill at 5 days!
- ECMWF performed extremely well when it was available, especially at longer lead times. A small improvement in arrival time would result in many more EMXI forecasts.
- FSU super-ensemble not as good as the Goerss corrected consensus, and no better than GUNA in a three-year sample.

43 Summary (2): Atlantic Basin - Intensity
- OFCL intensity errors were very close to the long-term mean, but skill levels dropped very sharply (i.e., even though Decay-SHIFOR errors were very low, OFCL errors did not decrease). The OFCL errors also trailed the GFDL and ICON guidance.
- For the first time, dynamical intensity guidance beat statistical guidance.
- OFCL forecasts had a substantial high bias. Even though the GFDL had smaller errors than OFCL, its bias was larger.
- FSU super-ensemble no better than a simple average of GFDL and DSHP (three-year sample).

44 Summary (3): East Pacific Basin - Track
- OFCL track errors up, skill down in 2006, although errors were slightly better than the long-term mean.
- OFCL beat the dynamical models, but not the consensus models. Much larger difference between the dynamical models and the consensus in the EPAC (same as 2005).
- FSU super-ensemble no better than GUNA (two-year sample).

45 Summary (4): East Pacific Basin - Intensity
- OFCL intensity errors/skill show little improvement.
- GFDL beat DSHP after 36 h, but ICON generally beat both.
- FSU super-ensemble slightly better than ICON at 24-48 h, but worse than ICON after that.


