Improving Probabilistic Ensemble Forecasts of Convection through the Application of QPF-POP Relationships
Christopher J. Schaffer (1), William A. Gallus Jr. (2), Moti Segal (2)
(1) National Weather Service, WFO Goodland
(2) Iowa State University, Ames, IA

Ensemble vs. Deterministic
- Probabilistic forecasts provide a measure of forecast uncertainty
- Small errors in a forecast's initial conditions grow exponentially (Hamill and Colucci 1997)
- Ensemble mean forecasts tend to be more skillful than individual deterministic forecasts (Smith and Mullen 1993; Ebert 2001; Chakraborty and Krishnamurti 2006)

Gallus and Segal (2004) and Gallus et al. (2007)
- Developed a precipitation-binning technique for deterministic forecasts
- Larger forecast precipitation amounts imply a greater probability of receiving precipitation
- POPs were increased further when different models showed an intersection of grid points with rain in a bin

Overview of Study
Goals:
- Apply post-processing techniques similar to the Gallus and Segal (2004) technique to ensemble forecasts
- Examine how the resulting forecasts compare to those from more traditional approaches

Data
- NOAA Hazardous Weather Testbed (HWT) Spring Experiments (2007 and 2008)
- Ensemble of ten WRF-ARW members with 4 km grid spacing, run by the Center for Analysis and Prediction of Storms (CAPS)
- 30-hour forecasts per case (five 6-hour time periods), initialized at 00 UTC
- Present study uses a subdomain of the 2007/2008 domains
- Forecasts coarsened onto 20 km grid spacing (see the sketch below)
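The coarsening step can be illustrated with a short Python/NumPy sketch. The exact remapping used in the study is not stated on the slide, so the block averaging, the coarsen function name, and the 5x5 block (4 km to 20 km) are illustrative assumptions.

```python
import numpy as np

def coarsen(precip_4km, factor=5):
    """Block-average a 2D precipitation field onto a coarser grid.

    Assumes fine-grid dimensions divisible by `factor` after trimming
    (4 km -> 20 km implies factor = 5). Averaging is an assumption;
    the study's exact remapping is not given in the transcript.
    """
    ny, nx = precip_4km.shape
    ny_c, nx_c = ny // factor, nx // factor
    trimmed = precip_4km[:ny_c * factor, :nx_c * factor]
    return trimmed.reshape(ny_c, factor, nx_c, factor).mean(axis=(1, 3))

# Example: a synthetic 4 km field coarsened to 20 km
fine = np.random.gamma(shape=0.5, scale=2.0, size=(460, 495))
coarse = coarsen(fine)
print(coarse.shape)  # (92, 99)
```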

Subdomain of Present Study
1980 km x 1840 km, rather than the full 3000 km x 2500 km (2007) domain

Methodology
Creation of 2D POP tables, indexed by two parameters (sketched below):
- Forecast precipitation amount within a bin (maximum or average amount)
- Number of ensemble members agreeing on precipitation amounts above a threshold
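A minimal sketch of how the two table parameters could be computed at each grid point, assuming the member forecasts are stacked in a NumPy array. The bin edges and the table_parameters name are assumptions; the deck specifies seven bins but not their boundaries.

```python
import numpy as np

# Illustrative bin edges (inches); the deck uses seven bins but does not
# list the boundaries, so these values are placeholders.
BIN_EDGES = [0.01, 0.05, 0.10, 0.25, 0.50, 1.00]  # 7 bins: <0.01 ... >1.00

def table_parameters(member_precip, threshold=0.01, use_max=True):
    """Return the two 2D-POP-table parameters at every grid point.

    member_precip : array (n_members, ny, nx) of forecast amounts (inches)
    Parameter 1: bin index of the maximum (or average) forecast amount.
    Parameter 2: number of members forecasting at least `threshold`.
    """
    summary = member_precip.max(axis=0) if use_max else member_precip.mean(axis=0)
    bin_index = np.digitize(summary, BIN_EDGES)          # 0..6
    n_agree = (member_precip >= threshold).sum(axis=0)   # 0..n_members
    return bin_index, n_agree
```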

Methodology (continued)
- Seven precipitation bins
- POPs assigned through hit rates: POP = h / f
- NCEP Stage IV observations used to designate hits
- Three thresholds: 0.01, 0.10, and 0.25 inch
- h is the number of "hits", i.e., grid points where the observed precipitation also exceeded the specified threshold
- f is the number of grid points with precipitation forecast for a given bin/member scenario
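Given training forecasts and matching Stage IV analyses, the hit-rate calibration described above amounts to counting, for each (bin, member-agreement) cell, how often the observation also exceeded the threshold. A sketch under those assumptions, reusing the hypothetical table_parameters helper from the previous block:

```python
import numpy as np

def build_pop_table(train_members, train_obs, n_members=10,
                    threshold=0.01, n_bins=7):
    """Build a 2D POP table as POP = h / f for each table cell.

    train_members : (n_cases, n_members, ny, nx) forecast amounts (inches)
    train_obs     : (n_cases, ny, nx) Stage IV analyses (inches)
    h = points where the observation also exceeded the threshold
    f = points falling in the given bin / member-agreement cell
    """
    hits = np.zeros((n_bins, n_members + 1))
    fcsts = np.zeros((n_bins, n_members + 1))
    for members, obs in zip(train_members, train_obs):
        bin_idx, n_agree = table_parameters(members, threshold)
        observed_hit = obs >= threshold
        for b in range(n_bins):
            for m in range(n_members + 1):
                cell = (bin_idx == b) & (n_agree == m)
                fcsts[b, m] += cell.sum()
                hits[b, m] += (cell & observed_hit).sum()
    with np.errstate(invalid="ignore"):
        return np.where(fcsts > 0, hits / fcsts, np.nan)
```

At forecast time, a point's POP is then a simple lookup into this table using its bin index and member-agreement count.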

Approach #1: Two-parameter point forecast approach

[Table: example 2D POP table (values in percent), indexed by precipitation bin (up to >1.0 inch) and number of agreeing members, with row averages and column averages; the column averages correspond to Cali_trad]

[Figure: Max_thr POPs for April 23, 6-hour period ending 12Z]

[Figure: Cali_trad POPs for April 23, 6-hour period ending 12Z]

[Figure: difference in POPs, Max_thr minus Cali_trad]

[Table: Brier score (BS), reliability, resolution, uncertainty, BSS, and bias for the GSD, Uncali_trad, Cali_trad, and Max_thr methods at the 0.01, 0.10, and 0.25 inch thresholds]
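The scores in this and the later tables follow standard verification definitions. A brief sketch of the Brier score and Brier skill score, with the sample climatological frequency used as the reference forecast (an assumption; the slides do not state the reference):

```python
import numpy as np

def brier_score(pop, occurred):
    """Brier score: mean squared error of probability forecasts in [0, 1]."""
    pop = np.asarray(pop, dtype=float)
    occurred = np.asarray(occurred, dtype=float)  # 1 if precip >= threshold
    return np.mean((pop - occurred) ** 2)

def brier_skill_score(pop, occurred):
    """BSS relative to the sample climatological frequency (assumed reference)."""
    occurred = np.asarray(occurred, dtype=float)
    clim = occurred.mean()
    bs_ref = brier_score(np.full_like(occurred, clim), occurred)
    return 1.0 - brier_score(pop, occurred) / bs_ref
```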

Approach #2: Two-parameter neighborhood approach

Neighborhoods: Theis et al. (2005), Ebert (2009)
- Within a specified square area around each center point, the maximum or average precipitation amount is determined and binned
- Second parameter: the number of points within the neighborhood with forecast precipitation amounts greater than a threshold
- Yields a spatially generated ensemble of forecasts for each member (see the sketch below)
[Figure: 3x3 neighborhood schematic]
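One way to compute the two neighborhood parameters for a single member, sketched with SciPy's sliding-window filters. The "nearest" boundary handling and the neighborhood_parameters name are assumptions; the study's treatment of domain edges is not given on the slide.

```python
import numpy as np
from scipy.ndimage import maximum_filter, uniform_filter

def neighborhood_parameters(precip, size=15, threshold=0.01, use_max=True):
    """Neighborhood analogues of the two POP-table parameters.

    precip : 2D forecast field from one member (inches)
    size   : neighborhood width in grid points (e.g., 3 or 15)
    Returns (neighborhood max-or-average amount,
             number of neighborhood points >= threshold).
    """
    if use_max:
        amount = maximum_filter(precip, size=size, mode="nearest")
    else:
        amount = uniform_filter(precip, size=size, mode="nearest")
    # Fraction of neighborhood points exceeding the threshold, then a count
    frac = uniform_filter((precip >= threshold).astype(float), size=size,
                          mode="nearest")
    n_exceed = np.rint(frac * size * size).astype(int)
    return amount, n_exceed
```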

[Table: statistics for Ave_nbh (15x15 grid points) at the 0.01 inch threshold: BS, reliability, resolution, uncertainty, BSS, bias, and ROC area for each of the ten members]

Scatterplot of Brier scores (0.01 inch threshold)

Scatterplot of Brier scores (0.10 and 0.25 inch thresholds)

Approach #3: Combination of methods

- Considers each method as an ensemble member that itself consists of ensemble members
- Uses the different POP tables to determine POPs for each method, then averages the POPs
- The individual methods show different trends in their POP fields
- Many variations of the approach are possible
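The combination step itself reduces to averaging the POP fields produced by the individual methods. A minimal sketch, where pop_fields is a list of 2D POP arrays, one per method (e.g., Max_thr, Ave_nbh); the combine_pops name is illustrative:

```python
import numpy as np

def combine_pops(pop_fields):
    """Average POP fields from several methods into one combined forecast.

    pop_fields : sequence of 2D arrays with POPs in [0, 1], one per method.
    Each method is treated as one member of a higher-level ensemble.
    """
    return np.mean(np.stack(pop_fields, axis=0), axis=0)
```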

[Table: BS, reliability, resolution, uncertainty, BSS, bias, and ROC area for the combination approach at 3x3 and larger neighborhood sizes and the 0.01, 0.10, and 0.25 inch thresholds; p-values compared to Cali_trad (90% C.I.)]

[Figure: POPs from the combination approach]

[Figure: Max_thr POPs]

Conclusions
Two-parameter point forecast approach:
- Improvements over Cali_trad, which encouraged the development of other approaches
Two-parameter neighborhood approach:
- Deterministic, but comparable to Cali_trad
- Improvements due to spatial ensembles
- Increased neighborhood size led to better Brier scores
Combination approach:
- Brings several methods/approaches together by averaging POPs
- Brier scores statistically significantly different from those of Cali_trad at the 90% confidence interval

Acknowledgments: This research was funded in part by two National Science Foundation (ATM) grants, with funds from the American Recovery and Reinvestment Act of 2009.