2006 NHC Verification Report Interdepartmental Hurricane Conference 5 March 2007 James L. Franklin NHC/TPC.



Verification Rules: The system must be a tropical (or subtropical) cyclone at both the forecast time and the verification time; this includes the depression stage (except as noted). Verification results are final (until we change something). Special advisories are ignored; regular advisories are verified. The skill baseline for track is the revised CLIPER5 (developmental data updated to [ATL] and [EPAC]), run post-storm on operational compute data. The skill baseline for intensity is the new Decay-SHIFOR5 model, run post-storm on operational compute data (OCS5); the minimum Decay-SHIFOR5 forecast is 15 kt. There is a new interpolated version of the GFDL, GHMI: the previous GFDL intensity forecast is lagged 6 h as always, but the full offset is applied only at 6-18 h, half the offset is applied at 24 h, and no offset is applied at or beyond 30 h. ICON now uses GHMI.
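The GHMI offset schedule described above can be sketched as follows. This is a minimal illustration under our own function names and an assumed additive-offset definition; it is not the operational ATCF/NHC code.

```python
def ghmi_offset_weight(lead_h):
    """Fraction of the interpolation offset applied at forecast hour lead_h:
    full offset through 18 h, half at 24 h, none at or beyond 30 h."""
    if lead_h <= 18:
        return 1.0
    if lead_h < 30:  # covers the 24 h forecast period
        return 0.5
    return 0.0


def make_ghmi(prev_gfdl, current_initial_intensity):
    """Adjust the 6-h-lagged GFDL intensity forecast (dict: lead_h -> kt,
    leads already aligned with the current forecast times) by the offset
    between the current initial intensity and the lagged run's value at t=0.
    Hypothetical helper, assuming a simple additive offset."""
    offset = current_initial_intensity - prev_gfdl[0]
    return {h: v + ghmi_offset_weight(h) * offset for h, v in prev_gfdl.items()}
```

For example, if the lagged run verifies 5 kt low at the current initial time, the 12 h forecast is raised by 5 kt, the 24 h forecast by 2.5 kt, and the 48 h forecast is left unchanged.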

Decay SHIFOR5 Model: Begin by running the regular SHIFOR5 model, then apply the DeMaria decay module to adjust the intensity of tropical cyclones for decay over land. This includes recent adjustments for less decay over skinny landmasses (the module estimates the fraction of the circulation over land). The algorithm requires a forecast track; for a skill baseline, the CLIPER5 track is used. (OFCI could be used if the intent were to provide guidance.)
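The decay adjustment can be illustrated with the exponential inland-decay model published by DeMaria et al.; the constants below are assumptions drawn from that literature (not stated in this report), and scaling the decay rate by the fraction of the circulation over land is a simplification of the actual module.

```python
import math

# Illustrative constants from the published DeMaria et al. inland decay model
# (Atlantic values, assumed here; not taken from this report):
ALPHA = 0.095   # decay rate (1/h)
V_B = 26.7      # background intensity the wind decays toward (kt)
R = 0.9         # reduction factor applied at landfall


def decayed_intensity(v0, hours_inland, frac_over_land=1.0):
    """Intensity (kt) after hours_inland over land, starting from over-water
    intensity v0.  frac_over_land scales the decay rate for circulations only
    partly over land (the 'skinny landmass' adjustment, simplified here)."""
    alpha_eff = ALPHA * frac_over_land
    return V_B + (R * v0 - V_B) * math.exp(-alpha_eff * hours_inland)
```

In this form a 100 kt storm drops to about 90 kt at landfall and then decays exponentially toward the background value, with a slower decay when only part of the circulation is over land.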

2006 Atlantic Verification [Table of OFCL errors by verification time VT (h): number of cases NT, track error (n mi), intensity error (kt); the numeric entries were not preserved in the transcript.] Values in green meet or exceed all-time records. * The 48 h track error for TS and H only was 96.6 n mi.
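Track errors like those tabulated here are great-circle distances between the forecast and verifying positions. A minimal haversine sketch (the function name and radius constant are ours, not from the report):

```python
import math

EARTH_RADIUS_NM = 3440.065  # mean Earth radius in nautical miles


def track_error_nm(lat_f, lon_f, lat_v, lon_v):
    """Great-circle distance (n mi) between the forecast position
    (lat_f, lon_f) and the verifying position (lat_v, lon_v),
    via the haversine formula."""
    phi1, phi2 = math.radians(lat_f), math.radians(lat_v)
    dphi = math.radians(lat_v - lat_f)
    dlmb = math.radians(lon_v - lon_f)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_NM * math.asin(math.sqrt(a))
```

As a sanity check, one degree of latitude is about 60 n mi, consistent with the definition of the nautical mile.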

Track Errors by Storm

2006 vs. 5-Year Mean

New 5-Year Mean 55 n mi/day

OFCL Error Distributions

Errors cut in half since 1990

Mixed Bag of Skill

2006 Track Guidance (Top Tier)

2nd Tier Early Models

2006 Late Models

Experimental NASA Model (FV5)

Guidance Trends

Goerss Corrected Consensus: CCON 120 h FSP: 36%; CGUN 120 h FSP: 33%. Small improvements of 1-3%, but the benefit is lost by 5 days.

FSU Superensemble vs Goerss Corrected Consensus

FSU Superensemble vs Other Consensus Models

2006 vs 5-Year Mean

No progress with intensity

Skill sinking faster than dry air over the Atlantic

Intensity Guidance

Dynamical Intensity Guidance Finally Surpasses Statistical Guidance

Intensity Error Distribution: When there are few rapid intensifiers, OFCL forecasts have a substantial high bias. GHMI had larger positive biases but higher skill (i.e., smaller but one-sided errors).
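The distinction drawn here between bias and error magnitude can be made concrete: a one-sided error distribution can carry a large bias yet a small mean absolute error. A minimal sketch with a hypothetical helper:

```python
def bias_and_mae(forecasts, observed):
    """Mean error (bias, kt) and mean absolute error (kt) for paired
    intensity forecasts.  Signed errors that are small but all one-sided
    give a bias close to the MAE; mixed-sign errors give a small bias
    even when the MAE is large."""
    errors = [f - o for f, o in zip(forecasts, observed)]
    n = len(errors)
    return sum(errors) / n, sum(abs(e) for e in errors) / n
```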

FSU Superensemble vs Other Consensus Models

2006 East Pacific Verification [Table of OFCL errors by verification time VT (h): number of cases N, track error (n mi), intensity error (kt); the numeric entries were not preserved in the transcript.] Values in green represent all-time lows.

2006 vs 5-Year Mean

Errors cut by 1/3 since 1990

OFCL Error Distributions

Skill trend noisy but generally upward

2006 Track Guidance (1st tier): Larger separation between the dynamical and consensus models (model errors were more random, less systematic).

2nd Tier

FSU Superensemble vs Other Consensus Models

Relative Power of Multi-model Consensus: n_e = 1.65 vs. n_e = 2.4
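One standard way to define an effective number of independent members n_e, for n equally skillful forecasts whose errors have mean pairwise correlation rho, is n_e = n / (1 + (n - 1) rho); that this is the definition behind the slide's numbers is our assumption. A minimal sketch:

```python
def effective_members(n, rho):
    """Effective number of independent members: the error variance of an
    n-member consensus mean with mean pairwise error correlation rho is
    sigma**2 * (1 + (n - 1) * rho) / n, i.e. sigma**2 / n_e."""
    return n / (1 + (n - 1) * rho)
```

Under this definition, strongly correlated members add little: five members with rho = 0.5 behave like fewer than two independent ones, which is why a consensus of diverse models beats adding near-duplicates.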

2006 vs Long-term Mean

Same as it ever was…

…same as it ever was.

2006 Intensity Guidance

FSU Superensemble vs Other Consensus Models

Summary: Atlantic Basin - Track. OFCL track errors set records for accuracy from h; mid-range skill appears to be trending upward. OFCL track forecasts were better than all the dynamical guidance models, but trailed the consensus models slightly. GFDL, GFS, and NOGAPS provided the best dynamical track guidance at various times; UKMET trailed badly, and no (early) dynamical model had skill at 5 days. ECMWF performed extremely well when it was available, especially at longer times; a small improvement in arrival time would result in many more EMXI forecasts. The FSU super-ensemble was not as good as the Goerss corrected consensus, and no better than GUNA in a three-year sample.

Summary (2): Atlantic Basin - Intensity. OFCL intensity errors were very close to the long-term mean, but skill levels dropped very sharply (i.e., even though Decay-SHIFOR errors were very low, OFCL errors did not decrease); OFCL errors also trailed the GFDL and ICON guidance. For the first time, dynamical intensity guidance beat statistical guidance. OFCL forecasts had a substantial high bias; even though the GFDL had smaller errors than OFCL, its bias was larger. The FSU super-ensemble was no better than a simple average of GFDL and DSHP (three-year sample).

Summary (3): East Pacific Basin - Track. OFCL track errors were up and skill was down in 2006, although errors were slightly better than the long-term mean. OFCL beat the dynamical models, but not the consensus models; there was a much larger difference between the dynamical models and the consensus in the EPAC (same as 2005). The FSU super-ensemble was no better than GUNA (two-year sample).

Summary (4): East Pacific Basin - Intensity. OFCL intensity errors/skill show little improvement. GFDL beat DSHP after 36 h, but ICON generally beat both. The FSU super-ensemble was slightly better than ICON at h, but worse than ICON after that.