Forecasting of Atlantic Tropical Cyclones Using a Kilo-Member Ensemble M.S. Defense Jonathan Vigh

Acknowledgements Graduate Adviser: Dr. Wayne Schubert. Master's Committee: Dr. Mark DeMaria, Dr. William Gray, Dr. Gerald Taylor, Dr. Scott Fulton (MUDBAR). Schubert Research Group. Data Sources: NCEP and TPC/NHC; Mary Haley and the NCL developers. Funding: fellowship support from the Significant Opportunities in Atmospheric Research and Science Program (UCAR/NSF) and the American Meteorological Society; NSF Grant ATM , NSF Grant ATM , NASA/CAMEX Grant NAG , and NOAA Grant NA17RJ1228.

Outline The Big Picture Background The MUDBAR Model Design of a Kilo-Member Ensemble Postprocessing and Verification Results Case Studies Conclusions

Why study track? Major improvements in official track errors: 72-h official track forecast errors have been decreasing by roughly 1.9% per year. Societal vulnerability is increasing faster (e.g., Mitch, evacuation times). Even with accurate forecasts of intensity, wind field, and rain, all is for naught if the track is wrong.

It's Chaos Out There! The idea behind a forecast: perfect models and perfect initializations vs. the nefarious atmosphere; error saturation and predictability limits. Much of the total track error comes from the large forecast errors of storms that follow erratic tracks. It would be good to know in advance, before large errors occur.

Predictability Limits for a Barotropic Model (Leslie et al. 1998) [figure: inherent vs. practical predictability limits; track error in nm]

Ensemble Background Definition: any set of forecasts that verify at the same time. The idea is to simulate the sources of uncertainty present in the forecast problem: uncertainty in the initial state and uncertainty in the model. Theory dictates that the mean forecast of a well-perturbed ensemble should perform better than any comparable single deterministic forecast.

Types of Ensembles Monte Carlo simulations; lagged-average forecasting; multimodel consensus (Poor Man's Ensemble); dynamically constrained methods: breeding of growing modes, singular vector decomposition.

Questions and the thesis: Can a well-perturbed ensemble mean give a better forecast than any single realization? How many ensemble members are necessary to give the “right” answer? Is there a relationship between ensemble spread and forecast error? Can this relationship be used to provide meaningful forecasts of forecast skill? How accurately does the ensemble envelope of all track possibilities encompass the actual observed track?

The MUDBAR Model The nondivergent modified barotropic model (MUDBAR) of Scott Fulton. Data enter the model through the initial condition (specify q) and the time-dependent boundary conditions (specify ψ on the boundary, q on inflow).
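The governing equation itself is not reproduced in this transcript. For reference, a nondivergent barotropic model of this kind integrates the vorticity equation (the MUDBAR-specific modification is not shown here):

\[
\frac{\partial q}{\partial t} + J(\psi, q) = 0, \qquad q = \nabla^{2}\psi + f, \qquad \mathbf{v} = \hat{\mathbf{k}} \times \nabla \psi,
\]

where ψ is the streamfunction, q the absolute vorticity, f the Coriolis parameter, and J the horizontal Jacobian operator.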

Model Setup (Vigh et al. 2003) 6000-km square domain; optimized 3-grid configuration, 32 x 32 grid points; mesh spacings of 194, 97, and 48 km. Each 120-h forecast takes 1.4 s on a 1-GHz PC (the entire ensemble runs in ~1 h). The model is able to reproduce the accuracy of the shallow-water LBAR model.

Bogussing Procedure The vortex profile of DeMaria (1987) and Chan and Williams (1987) is used. This bogus vortex is blended with the GFS initial wind field at the operationally estimated storm position, with the appropriate motion vector added.
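The profile formula shown on the original slide is not preserved in the transcript. The tangential wind profile of Chan and Williams (1987), which the slide cites, is commonly written as

\[
v(r) = V_{m}\left(\frac{r}{r_{m}}\right) \exp\!\left\{\frac{1}{b}\left[1 - \left(\frac{r}{r_{m}}\right)^{b}\right]\right\},
\]

where V_m is the maximum tangential wind, r_m is the radius of maximum wind, and b controls how quickly the winds decay outside r_m; the particular b used in the thesis is not given here.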

Ensemble Design Simple parameter-based perturbation methodology (fixed). The number and magnitudes of perturbations in each class were chosen based on sensitivity experiments. Five perturbation classes: 11 environmental perturbations (NCEP GFS ensemble: 1 control forecast + 10 perturbed forecasts); 4 perturbations to the depth of the layer-mean averaging of the wind: very deep layer mean (1000–100 hPa), standard deep layer mean (850–200 hPa), moderate-depth layer mean (850–350 hPa), shallow-depth layer mean (850–500 hPa).

Ensemble Design, cont'd 3 perturbations to the model's equivalent phase speed: 300 m/s (appropriate for subtropical highs), 150 m/s (middle of the road), 50 m/s (appropriate for convective systems); 3 perturbations to the bogus vortex size (Vm): Vm = 15 m/s (small vortex), Vm = 30 m/s (medium-size vortex), Vm = 50 m/s (large vortex); 5 perturbations to the storm motion vector. All perturbations are cross-multiplied to get an ensemble of 11 x 4 x 3 x 3 x 5 = 1980 members: the Kilo-Ensemble!
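As a minimal sketch of how the five perturbation classes combine (the labels below are illustrative placeholders, not actual model inputs), the cross multiplication and the quoted wall-clock figure can be checked as follows:

```python
from itertools import product

# Perturbation classes as listed on the slides (labels only).
environment = [f"GFS member {i}" for i in range(11)]    # 1 control + 10 perturbed
layer_depth = ["1000-100 hPa", "850-200 hPa", "850-350 hPa", "850-500 hPa"]
phase_speed = [300, 150, 50]                            # equivalent phase speed, m/s
vortex_vmax = [15, 30, 50]                              # bogus vortex Vm, m/s
motion_pert = [f"motion perturbation {i}" for i in range(5)]

members = list(product(environment, layer_depth, phase_speed, vortex_vmax, motion_pert))
print(len(members))                                     # 1980
print(f"~{len(members) * 1.4 / 60:.0f} min of CPU time at 1.4 s per forecast")  # ~46 min
```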

Postprocessing 1980 individual member forecasts: what to do now? Total ensemble mean (ZTOT) and spread (a 20% cutoff is used); subensemble means and spread for each perturbation class; calculation of spatial strike probabilities. Value of probabilistic forecasting: probabilities don't hedge (unlike a categorical statement such as "the high tomorrow will be …"), and they capture the entire essence of the ensemble forecast.
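The transcript does not spell out exactly how the spatial strike probabilities are computed. A common definition is the fraction of member tracks that pass within a fixed radius of each grid point; the sketch below assumes that definition and an illustrative 120-km radius:

```python
import numpy as np

def strike_probability(member_tracks, grid_lats, grid_lons, radius_km=120.0):
    """Fraction of ensemble members whose forecast track passes within
    radius_km of each grid point. member_tracks is a list of (lats, lons)
    arrays, one pair per member; the 120-km radius is illustrative."""
    lat_grid, lon_grid = np.meshgrid(grid_lats, grid_lons, indexing="ij")
    hits = np.zeros(lat_grid.shape)
    for lats, lons in member_tracks:
        # Flat-earth distance (km) from every grid point to every track point.
        dlat = lat_grid[..., None] - lats
        dlon = (lon_grid[..., None] - lons) * np.cos(np.deg2rad(lat_grid[..., None]))
        dist = 111.0 * np.sqrt(dlat**2 + dlon**2)
        hits += dist.min(axis=-1) <= radius_km
    return hits / len(member_tracks)
```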

Verification Murphy (1993) describes three types of "goodness" for forecasts: consistency, quality, and value. The job of verification is to measure this goodness, using measures-oriented methods and distribution-oriented methods.

Verification Procedures 293 cases from roughly 50 storms from the Atlantic hurricane seasons studied; only tropical and subtropical cases are included; all seasonal statistics are homogeneous. Statistics are calculated for the total ensemble mean and subensemble mean track forecasts: mean track error, x-bias, y-bias, skill relative to CLIPER, and frequency of superior performance.
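The transcript does not define the error and skill measures, but the standard conventions are great-circle track error and percent improvement over CLIPER. A minimal sketch, assuming those conventions:

```python
import numpy as np

def track_error_km(fc_lat, fc_lon, bt_lat, bt_lon):
    """Great-circle distance (km) between a forecast position and the
    best-track (verifying) position."""
    fc_lat, fc_lon, bt_lat, bt_lon = map(np.radians, (fc_lat, fc_lon, bt_lat, bt_lon))
    a = (np.sin((bt_lat - fc_lat) / 2) ** 2
         + np.cos(fc_lat) * np.cos(bt_lat) * np.sin((bt_lon - fc_lon) / 2) ** 2)
    return 6371.0 * 2 * np.arcsin(np.sqrt(a))

def skill_vs_cliper(forecast_error, cliper_error):
    """Percent improvement over the CLIPER benchmark; positive values mean
    the forecast beats CLIPER."""
    return 100.0 * (cliper_error - forecast_error) / cliper_error
```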

Other measures of ensemble performance Reliability of the ensemble envelope: the outer envelope (0%) contained the verification 80% of the time at 72 h and 66% of the time at 120 h. Reliability of the spatial probabilities. Spread vs. error relationship: large spread -> large error, small spread -> small error.
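One simple way to quantify the spread-error relationship described above (not necessarily the method used in the thesis) is to bin forecast cases by ensemble spread and compare the mean track error in each bin:

```python
import numpy as np

def mean_error_by_spread_tercile(spread, mean_track_error):
    """Split cases at a fixed lead time into low/medium/high spread terciles
    and return the mean ensemble-mean track error for each group."""
    spread = np.asarray(spread, float)
    mean_track_error = np.asarray(mean_track_error, float)
    edges = np.percentile(spread, [100 / 3, 200 / 3])
    bins = np.digitize(spread, edges)       # 0 = low, 1 = medium, 2 = high spread
    return [mean_track_error[bins == k].mean() for k in range(3)]
```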

Conclusions The ensemble mean forecast did not outperform the control forecast. The ensemble strike probabilities seem within the realm of reality (reliability plot). A weak relationship between spread and error peaks at 60 h -> forecast skill can be estimated. The validity of the barotropic model decreases at around 84 h, just as the benefits of the GFS environmental perturbations start to kick in.

Questions: Can a well-perturbed ensemble mean give a better forecast than any single realization? How many ensemble members are necessary to give the “right” answer? Is there a relationship between ensemble spread and forecast error? Can this relationship be used to provide meaningful forecasts of forecast skill? How accurately does the ensemble envelope of all track possibilities encompass the actual observed track?

Possible reasons for performance degradation Reasons for poor ensemble performance: barotropic dynamics are too simple; artificial edge biases; poor design (the fixed perturbations are not very good); spurious binary interactions between the bogus vortex and the GFS-analyzed vortex.

Future Work Immediate future work (before Miami) Verify the strike probabilities using the Brier and the ROC scores Calculate a 26-member ensemble from just the 26 perturbations (without cross multiplication) Derive and verify cluster analysis forecasts Determine extent and effect of the binary interactions
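The Brier score mentioned above is the mean squared difference between the forecast strike probabilities and the binary observed outcomes; a minimal sketch of that verification step:

```python
import numpy as np

def brier_score(forecast_prob, observed):
    """Brier score for probabilistic strike forecasts. forecast_prob holds
    probabilities in [0, 1]; observed holds 1 where a strike occurred and 0
    where it did not. Lower is better; 0 is a perfect forecast."""
    forecast_prob = np.asarray(forecast_prob, float)
    observed = np.asarray(observed, float)
    return np.mean((forecast_prob - observed) ** 2)
```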

Future Work (cont'd) Select an optimal subensemble for the particular forecast situation (error recycling); redesign the ensemble to use relative perturbations; compare to other ensembles for track forecasting (GFS, GUNA, ECMWF, etc.).

Questions