61st IHC, New Orleans, LA
Verification of the Monte Carlo Tropical Cyclone Wind Speed Probabilities: A Joint Hurricane Testbed Project Update
John A. Knaff and Mark DeMaria, NOAA/NESDIS Office of Research and Applications, Fort Collins, CO
Chris Lauer, NOAA/NWS Tropical Prediction Center, Miami, FL

TPC Monte Carlo Wind Probabilities
[Figures: wind probability maps for Hurricane Gordon and Hurricane Helene]

Motivation for Verification
- Assess how well the current wind probability algorithm performs
- Assess future improvements to the wind probability algorithm
- Provide another way to track the performance of the deterministic forecasts

How good are the Monte Carlo wind probabilities?
Each verification question is paired with the statistic that answers it:
- How does the average forecast magnitude compare to the average observed magnitude? Multiplicative bias
- What is the magnitude of the probability forecast errors? (Half) Brier score
- What is the relative skill of the probabilistic forecast over the deterministic forecasts, in terms of predicting whether or not an event occurred? Brier skill score
- How well do the predicted probabilities of an event correspond to their observed frequencies? Reliability diagrams
- What is the ability of the forecast to discriminate between events and non-events? Relative operating characteristics (ROC)

34-kt Probability Grids
[Figures: 34-kt probability grids for Hurricane Gordon and Hurricane Helene]

34-kt OFCL Forecast
[Figures: official (OFCL) 34-kt forecasts for Hurricane Gordon and Hurricane Helene]

34-kt Best Track
[Figures: 34-kt best-track data for Hurricane Gordon and Hurricane Helene]

How good are the Monte Carlo wind probabilities?
- How does the average forecast magnitude compare to the average observed magnitude? Multiplicative bias

Results: Multiplicative Biases
Atlantic, July 18 – October 4, 2006; 105W–1W, 1N–60N
Range: minus infinity to infinity. Perfect score: 1.

How good are the Monte Carlo wind probabilities?
- What is the magnitude of the probability forecast errors? (Half) Brier score
- What is the relative skill of the probabilistic forecast over the deterministic forecasts, in terms of predicting whether or not an event occurred? Brier skill score

Results: Brier Skill Score
Atlantic, July 18 – October 4, 2006; 105W–1W, 1N–60N
Range: minus infinity to 1; 0 indicates no skill when compared to the reference forecast. Perfect score: 1.

How good are the Monte Carlo wind probabilities?
- How well do the predicted probabilities of an event correspond to their observed frequencies? Reliability diagrams

Reliability Diagrams
A reliability diagram plots the observed frequency of events against bins of forecast probabilities.
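A minimal sketch of how those bins can be computed, assuming NumPy arrays named probs (forecast probabilities in [0, 1]) and events (0/1 outcomes); the bin count and names are illustrative assumptions, not the authors' verification code:

import numpy as np

def reliability_curve(probs, events, n_bins=10):
    """Observed event frequency within each forecast-probability bin."""
    probs = np.asarray(probs, dtype=float)
    events = np.asarray(events, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    # np.digitize places p == 1.0 past the last edge; clip it into the top bin
    idx = np.clip(np.digitize(probs, edges) - 1, 0, n_bins - 1)
    centers, freqs = [], []
    for b in range(n_bins):
        in_bin = idx == b
        if in_bin.any():                            # skip empty bins
            centers.append(0.5 * (edges[b] + edges[b + 1]))
            freqs.append(events[in_bin].mean())     # observed frequency
    return np.array(centers), np.array(freqs)

Plotting freqs against centers, with the 1:1 diagonal for reference, gives the reliability diagram; points near the diagonal indicate that forecast probabilities match observed frequencies.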

Results: Reliability Diagrams
Atlantic, June 17 – October 4, 2006; 105W–1W, 1N–60N

How good are the Monte Carlo wind probabilities?
- What is the ability of the forecast to discriminate between events and non-events? Relative operating characteristics (ROC)

Results: ROC Diagram
[Figure: ROC diagram with curves/points labeled at the 2%, 5%, and 10% probability thresholds, shown relative to the no-skill diagonal]

Results: ROC Skill Score
Range: -1 to 1; 0 indicates no skill. Perfect score: 1.

Summary
Based upon a preliminary verification of the Atlantic Monte Carlo wind probabilities:
- Probabilities have small negative biases
- Probabilities are more skillful than the deterministic forecast (i.e., OFCL) in determining whether an event will happen
- Predicted probabilities appear well related to observed frequencies
- The probabilities are skillful in their ability to discriminate between events and non-events

Next
- Complete the verification of the wind probabilities in the Atlantic, East Pacific, Central Pacific, and West Pacific
- Summarize results and identify potential problems with the Monte Carlo software
- Deliver the verification code to TPC

Future Plans
- Begin our new project to use the model forecast spread [i.e., the Goerss Predicted Consensus Error (GPCE)] to better estimate the track error distributions used for the Monte Carlo integration
- Assist, where possible, M. Mainelli (TPC) in understanding the verification of the wind probabilities at the watch/warning breakpoints

Multiplicative Bias
Answers the question: How does the average forecast magnitude compare to the average observed magnitude?
Range: minus infinity to infinity. Perfect score: 1.
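Written out, the standard definition (a textbook formula consistent with the range and perfect score quoted above, not reproduced from the slide) is the ratio of the mean forecast to the mean observation:

\[
\text{bias} = \frac{\frac{1}{N}\sum_{i=1}^{N} F_i}{\frac{1}{N}\sum_{i=1}^{N} O_i}
\]

where \(F_i\) are the forecast magnitudes and \(O_i\) the corresponding observed magnitudes.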

Brier Score
The (half) Brier score answers the question: What is the magnitude of the probability forecast errors?
Range: 0 to 1. Perfect score: 0.
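As a standard definition (again not reproduced from the slide), the half Brier score is the mean squared error of the probability forecasts:

\[
BS = \frac{1}{N}\sum_{i=1}^{N} (p_i - o_i)^2
\]

where \(p_i\) is the forecast probability and \(o_i = 1\) if the event occurred, \(0\) otherwise. The "half" qualifier distinguishes this one-category form from Brier's original score, which sums over both categories and equals twice this value.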

Brier Skill Score
Answers the question: What is the relative skill of the probabilistic forecast over that of climatology, in terms of predicting whether or not an event occurred?
Range: minus infinity to 1; 0 indicates no skill when compared to the reference forecast. Perfect score: 1.
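In its standard form, the skill score is the fractional improvement of the forecast Brier score over that of a reference forecast:

\[
BSS = 1 - \frac{BS}{BS_{\mathrm{ref}}}
\]

where \(BS_{\mathrm{ref}}\) is the Brier score of the reference (climatology, or the deterministic OFCL forecast in the comparisons shown earlier).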

Statistical Methods: Relative Operating Characteristics
A series of 2x2 contingency tables, conditional on a range of forecast probabilities, is constructed. For instance, a warning would be issued if the probability exceeded 1, 2, 3, 4, ... 100%.

Observation      Warning (W)         No Warning (W')          Total
Event (E)        h (hits)            m (misses)               E
Nonevent (E')    f (false alarms)    c (correct rejections)   E'
Total            w                   w'                       N

The results of the contingency tables can be quantified in terms of the hit rate, hr = h/(h+m), and the false-alarm rate, far = f/(f+c). A plot of far vs. hr can then be created, and a skill score derived from the area under the curve, as sketched below.
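A minimal sketch of this threshold sweep, assuming NumPy arrays named probs (forecast probabilities) and events (0/1 outcomes); this is an illustration of the method described above, not the project's verification code:

import numpy as np

def roc_points(probs, events, thresholds=np.arange(0.01, 1.01, 0.01)):
    """Hit rate and false-alarm rate for each warning threshold."""
    probs = np.asarray(probs, dtype=float)
    events = np.asarray(events, dtype=bool)
    hr, far = [], []
    for t in thresholds:
        warn = probs >= t                  # warning issued at this threshold
        h = np.sum(warn & events)          # hits
        m = np.sum(~warn & events)         # misses
        f = np.sum(warn & ~events)         # false alarms
        c = np.sum(~warn & ~events)        # correct rejections
        hr.append(h / (h + m) if h + m else np.nan)   # undefined if no events
        far.append(f / (f + c) if f + c else np.nan)  # undefined if no nonevents
    return np.array(far), np.array(hr)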

Statistical Methods: ROC Diagram and Skill Score
ROC skill score = 2A - 1, where A is the area under the ROC curve (Mason and Graham 1999).
[Figure: ROC skill score schematic showing the skill and no-skill regions]
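Continuing the sketch above, the area A can be estimated by trapezoidal integration of the (far, hr) points, anchored at the corners (0, 0) and (1, 1):

import numpy as np

def roc_skill_score(far, hr):
    """ROC skill score = 2A - 1, where A is the area under the ROC curve."""
    order = np.argsort(far)                  # sort points by false-alarm rate
    x = np.concatenate(([0.0], np.asarray(far)[order], [1.0]))
    y = np.concatenate(([0.0], np.asarray(hr)[order], [1.0]))
    area = np.trapz(y, x)                    # trapezoidal integration
    return 2.0 * area - 1.0

A perfect forecast gives A = 1 (skill score 1), while the no-skill diagonal gives A = 0.5 (skill score 0), matching the range quoted on the Results: ROC Skill Score slide.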