
1 A Preliminary Verification of the National Hurricane Center’s Tropical Cyclone Wind Probability Forecast Product
Jackie Shafer, Scitor Corporation
Florida Institute of Technology – Department of Marine and Environmental Systems
March 6, 2008

2 Overview
Introduction
Methodology
Results
Conclusions
Questions

3 Introduction
Evaluate the performance of the Tropical Cyclone Wind Probability Forecast Product
– Issued by the National Hurricane Center
– Related study (Dr. John Knaff and Dr. Mark DeMaria): entire Atlantic basin, 2006 hurricane season
– This project: 2004–2007 hurricane seasons, Florida east coast from Jacksonville to Miami

4 Methodology
Data collection
– 2004 hurricane season: provided by Dr. Mark DeMaria
– 2005–2007 hurricane seasons: provided by NHC

5 Methodology cont’d…
Determine whether the wind speed criterion “occurred” or “did not occur” during each forecast time interval for each event.
Example – 12 h forecast interval, 0600Z–1800Z on 09/04/2004:
≥34 kt occurred in: Cocoa Beach, Fort Pierce, West Palm Beach, Miami
≥34 kt did not occur in: Jacksonville, Daytona Beach
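A minimal sketch of this occurrence check (not from the study): assuming the peak observed sustained wind per site for the interval is already in hand, each site is flagged against the 34 kt criterion. The site list matches the example above, but the wind values are illustrative.

```python
# Sketch: flag whether the >=34 kt criterion "occurred" at each site during one
# 12 h forecast interval. The peak-wind values below are illustrative only.
THRESHOLD_KT = 34

observed_peak_wind_kt = {      # hypothetical peak sustained wind (kt) per site
    "Jacksonville": 22,
    "Daytona Beach": 30,
    "Cocoa Beach": 41,
    "Fort Pierce": 45,
    "West Palm Beach": 52,
    "Miami": 48,
}

for site, peak in observed_peak_wind_kt.items():
    status = "occurred" if peak >= THRESHOLD_KT else "did not occur"
    print(f"{site}: >= {THRESHOLD_KT} kt {status}")
```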

6 Methodology cont’d…
Classification of probability forecasts:
“Hit”: event forecast to occur, event occurred
“Miss”: event forecast not to occur, event occurred
“False Alarm”: event forecast to occur, event did not occur
“Correct Negative”: event forecast not to occur, event did not occur

Table 1: Contingency table showing the classification of each probability forecast

                          Occurred   Did Not Occur
Forecast % ≥ Threshold    Hit        False Alarm
Forecast % < Threshold    Miss       Correct Negative
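A minimal sketch of the Table 1 classification (illustrative, not the study’s code), assuming each forecast carries a probability in percent and a flag for whether the wind criterion was observed; the 8% threshold in the usage example is just a placeholder.

```python
def classify(forecast_prob_pct: float, occurred: bool, threshold_pct: float) -> str:
    """Classify one probability forecast using the Table 1 contingency scheme."""
    forecast_yes = forecast_prob_pct >= threshold_pct
    if forecast_yes and occurred:
        return "Hit"
    if forecast_yes and not occurred:
        return "False Alarm"
    if not forecast_yes and occurred:
        return "Miss"
    return "Correct Negative"

# Example: a 12% wind probability forecast, event observed, 8% decision threshold
print(classify(12.0, True, 8.0))   # -> Hit
```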

7 Methodology cont’d…
Probability of Detection (POD): the fraction of the observed events that were correctly forecast. Range: 0 to 1; perfect score: 1.
Probability of False Detection (POFD), the “false alarm rate”: a measure of the product’s ability to forecast non-events. Range: 0 to 1; perfect score: 0.
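Written out in terms of the Table 1 counts (H = hits, M = misses, FA = false alarms, CN = correct negatives), these verbal definitions correspond to the standard contingency-table formulas:

```latex
\[
  \mathrm{POD}  = \frac{H}{H + M}, \qquad
  \mathrm{POFD} = \frac{FA}{FA + CN}
\]
```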

8 Methodology cont’d…

Table 1: Contingency table showing the classification of each probability forecast

                          Occurred   Did Not Occur
Forecast % ≥ Threshold    Hit        False Alarm
Forecast % < Threshold    Miss       Correct Negative

Probability of Detection: POD = Hits / (Hits + Misses)
Probability of False Detection (“False Alarm Rate”): POFD = False Alarms / (False Alarms + Correct Negatives)

9 [Figure: probability threshold scale from 0% to 100%, with 8% marked as the “Optimal Threshold”]
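The slide does not state how the optimal threshold was chosen. A plausible sketch (an assumption, not the study’s documented method) is to sweep candidate thresholds and keep the one that maximizes the True Skill Statistic (POD minus POFD), one of the measures listed on the statistics slide below.

```python
# Sketch: sweep candidate probability thresholds and keep the one maximizing
# TSS = POD - POFD. Using TSS as the selection criterion is an assumption;
# the slide only shows the resulting 8% threshold.

def counts(forecasts, threshold_pct):
    """forecasts: iterable of (probability_pct, occurred) pairs -> (H, M, FA, CN)."""
    h = m = fa = cn = 0
    for prob, occurred in forecasts:
        forecast_yes = prob >= threshold_pct
        if forecast_yes and occurred:
            h += 1
        elif forecast_yes:
            fa += 1
        elif occurred:
            m += 1
        else:
            cn += 1
    return h, m, fa, cn

def best_threshold(forecasts, candidates=range(1, 101)):
    def tss(t):
        h, m, fa, cn = counts(forecasts, t)
        pod = h / (h + m) if (h + m) else 0.0
        pofd = fa / (fa + cn) if (fa + cn) else 0.0
        return pod - pofd
    return max(candidates, key=tss)

# Hypothetical (probability %, occurred?) pairs, for illustration only
sample = [(12, True), (6, False), (30, True), (2, False), (9, True), (4, False)]
print(best_threshold(sample))
```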

10 Methodology cont’d…
Classification of probability forecasts:
“Hit”: event forecast to occur, event occurred
“Miss”: event forecast not to occur, event occurred
“False Alarm”: event forecast to occur, event did not occur
“Correct Negative”: event forecast not to occur, event did not occur

Table 1: Contingency table showing the classification of each probability forecast

                          Occurred   Did Not Occur
Forecast % ≥ Threshold    Hit        False Alarm
Forecast % < Threshold    Miss       Correct Negative

11 Thresholds by forecast interval (three sets)

Forecast Interval   Threshold (set 1)   Threshold (set 2)   Threshold (set 3)
12 h                5%                  2%                  1%
24 h                10%                 1%                  5%
36 h                10%                 4%                  10%
48 h                12%                 9%                  9%
72 h                9%                  2%                  4%
96 h                4%                  1%                  3%
120 h               2%                  1%                  1%

12 Statistics
Probability of Detection
Probability of False Detection
False Alarm Ratio
Threat Score
Bias Score
Accuracy
True Skill Statistic
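For reference, a sketch computing all seven measures from the Table 1 counts. These are the standard contingency-table formulas, assumed (not confirmed by the slide) to match the definitions used in the study; the example counts are made up.

```python
def verification_stats(hits, misses, false_alarms, correct_negatives):
    """Standard 2x2 contingency-table verification measures (no zero-division guards)."""
    h, m, fa, cn = hits, misses, false_alarms, correct_negatives
    total = h + m + fa + cn
    pod = h / (h + m)              # Probability of Detection
    pofd = fa / (fa + cn)          # Probability of False Detection ("false alarm rate")
    far = fa / (h + fa)            # False Alarm Ratio
    threat = h / (h + m + fa)      # Threat Score (critical success index)
    bias = (h + fa) / (h + m)      # Bias Score (forecast events / observed events)
    accuracy = (h + cn) / total    # Fraction of all forecasts that were correct
    tss = pod - pofd               # True Skill Statistic
    return {"POD": pod, "POFD": pofd, "FAR": far, "Threat Score": threat,
            "Bias": bias, "Accuracy": accuracy, "TSS": tss}

# Hypothetical counts for a single forecast interval and wind threshold
print(verification_stats(hits=40, misses=5, false_alarms=10, correct_negatives=145))
```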

13 Ranges

Forecast Interval (hr)   34 kt   50 kt   64 kt
12                       100
24                       99      94      71
36                       92      71      47
48                       76      46      28
72                       48      25      13
96                       40      22      11
120                      20      10      6

14 Conclusions
The product performs well.
The product shows high accuracy, ranging from 0.98 at 12 h to 0.66 at 120 h.
The product adequately distinguishes observed events from non-observed events: TSS ranges from 0.97 at 12 h to 0.25 at 120 h.
Probabilities that may seem small or unimportant are actually significant.

15 So What?
Results will be useful for:
– Operational decisions at CCAFS and KSC: shuttle rollback, payload protection, personnel evacuation
– Risk management: evaluating cost-risk-benefit ratios for evacuation decisions
Additional areas of interest: Corpus Christi, TX; New Orleans, LA; Charleston, SC

Questions?

