MOS and Evolving NWP Models

Presentation transcript:

MOS and Evolving NWP Models
Developer's Dilemma: frequent changes to NWP models…
- Make the need for reliable statistical guidance more critical
  - Helps forecasters adapt to changing model performance
  - Can help remove systematic bias and "downscale" output
  - Forecasters need updated guidance when models change
- Make the development of reliable statistical guidance more difficult
  - Operational statistical systems can be compromised
  - Long, stable dependent samples are not available
[Diagram: MODELS + OBS -> MOS]
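For readers new to MOS: an equation is, at heart, a multiple linear regression of station observations (OBS) on archived model output (MODELS). A minimal sketch of that relationship, with illustrative predictors and values rather than MDL's actual predictor set:

```python
import numpy as np

# Dependent sample: archived NWP output interpolated to a station,
# paired with verifying observations at the same valid times.
# Predictors (illustrative): model 2-m temperature (K), 925-hPa wind speed (m/s).
X = np.array([[271.2,  8.1],
              [268.4,  5.5],
              [274.9, 11.0],
              [270.0,  6.3]])
y = np.array([270.1, 267.0, 273.2, 268.8])  # observed 2-m temperature (K)

# Fit y = b0 + b1*x1 + b2*x2 by least squares: the "MOS equation".
A = np.column_stack([np.ones(len(y)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Apply the equation to fresh model output to produce calibrated guidance.
x_new = np.array([1.0, 272.5, 9.0])  # [intercept term, temp, wind]
print("MOS temperature guidance:", x_new @ coef)
```

The dilemma above follows directly: the fitted coefficients are only as good as the stability of the model that generated the dependent sample.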

Responding to NWP Model Changes
Parallel evaluation: run MOS with the new vs. the old NWP model and assess the impacts on MOS skill (a minimal comparison is sketched below).
Do nothing? OK if the impacts are minimal. But often they aren't! (e.g., GFS winds and temperatures)
OK, now what?
- Bias characteristics are significantly different
- Undesirable effects on MOS performance
- Limited sample available from the newest version
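As a rough illustration of "assess the impacts on MOS skill", one can verify guidance from both parallel streams against the same observations; the arrays below are hypothetical stand-ins for a real parallel period:

```python
import numpy as np

def mae(fcst, obs):
    """Mean absolute error over the parallel period."""
    return np.mean(np.abs(fcst - obs))

def bias(fcst, obs):
    """Mean error (forecast minus observation)."""
    return np.mean(fcst - obs)

# Hypothetical MOS wind-speed guidance (kt) driven by the old and new
# model over the same parallel period, with the verifying observations.
obs     = np.array([10.0, 12.0,  8.0, 15.0, 11.0])
mos_old = np.array([10.5, 11.5,  8.5, 14.0, 11.5])
mos_new = np.array([ 9.0, 10.0,  7.0, 13.0,  9.5])

for name, f in [("old model", mos_old), ("new model", mos_new)]:
    print(f"{name}: MAE={mae(f, obs):.2f}  bias={bias(f, obs):+.2f}")
# A large shift in bias (as seen with GFS winds) flags a degradation
# that "doing nothing" would pass straight on to forecasters.
```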

GFS: Hybrid EnKF parallel evaluation – Cool Season

[Figure: GFS MOS wind bias, Jan-Apr 2011 vs. May-Jul 2011]

MOS Parallel Evaluations: Lessons Learned
- Single-station MOS elements (i.e., temperatures and winds) are most affected by changes in model bias
  - Effects vary regionally and seasonally
  - Bias correction was evaluated: some improvement for temperatures, but not sufficient for winds (a generic sketch follows this slide)
  - MOS makes heavy use of boundary-layer model predictors
- Regionalized elements (PoP, QPF, clouds, thunderstorms, ceiling, etc.) generally show less degradation
  - Regionalized equations are less sensitive to model changes
- Not all model changes degrade the MOS guidance: improved data assimilation, improved timing, and reduced random errors can help
- MOS parallel evaluations have identified bias problems in upgraded models
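One common generic form of bias correction is a decaying-average estimate of recent guidance error subtracted from the raw value; this sketch illustrates the idea only and is not necessarily the scheme MDL evaluated (the weight w and the error values are assumptions):

```python
import numpy as np

def decaying_average_bias(errors, w=0.2):
    """Running bias estimate: b_t = (1 - w) * b_{t-1} + w * e_t."""
    b = 0.0
    for e in errors:
        b = (1.0 - w) * b + w * e
    return b

# Hypothetical recent guidance errors (guidance minus obs, K) at a station.
recent_errors = np.array([1.2, 0.8, 1.5, 1.0, 1.3])
b = decaying_average_bias(recent_errors)
raw_guidance = 275.0  # K
print(f"estimated bias {b:+.2f} K -> corrected guidance {raw_guidance - b:.2f} K")
```

A first-moment correction like this can help a quasi-linear element such as temperature; it cannot fix the deeper problem for winds, where the regression leans on boundary-layer predictors whose relationships to the observations have changed.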

Mitigating the Effects on Development
To help reduce the impact of model changes and small sample sizes, we rely upon:
1. A consistent archive grid: smoothing of fine-scale detail and a constant mesh length for grid-sensitive calculations
2. Enlarged geographic regions: larger data pools help stabilize the equations and reduce sensitivity to the boundary layer
3. "Robust" predictor variables: fewer boundary-layer variables; favor variables less sensitive to known model changes
4. Multivariate MOS equations: can compensate for sudden changes in a single predictor
5. Mixed samples: increase the sample size by blending output from the latest model version with output from the previous version (sketched below)
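A minimal sketch of strategy 5, pooling dependent samples across model versions before fitting the regression; the array shapes, synthetic data, and unweighted pooling are assumptions for illustration:

```python
import numpy as np

def fit_mos(X, y):
    """Least-squares fit of a MOS equation (intercept plus linear terms)."""
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

rng = np.random.default_rng(0)
# Long archive from the previous model version vs. a short parallel
# sample from the new version (synthetic stand-ins for real pairs).
X_prev, y_prev = rng.normal(size=(800, 3)), rng.normal(size=800)
X_new,  y_new  = rng.normal(size=(60, 3)),  rng.normal(size=60)

# Mixed-sample development: pool both versions so the equation is
# anchored by the long archive yet reflects the new model's behavior.
X_mix = np.vstack([X_prev, X_new])
y_mix = np.concatenate([y_prev, y_new])
print(fit_mos(X_mix, y_mix))
```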

GFS MOS Wind Verification Results*, 5/10/2011 – 9/30/2011
[Figure: wind speed MAE and bias, Southwest CONUS]
Using even just a little data from the new NWP model version can be helpful!
* 2-season dependent sample (4/2010 – 9/2011)

MDL Summary: Retrospective Runs or Reforecasts
- Minimize the time lag between significant model changes and the availability of quality guidance
- NMM and GFS development work demonstrated the usefulness of a 2-3 year sample of "new" model output mixed with the previous version for temperatures and winds
  - The sample has to be representative across all seasons and synoptic regimes
  - Allows calibrated guidance to take advantage of model improvements
- Representativeness is improved by reruns every Nth day (MDL would defer to others for the ideal N), but the reruns MUST use the operational model configuration (a date-selection sketch follows)
- The National Model Blender Project will face challenges gathering samples sufficient for development
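A small sketch of the every-Nth-day rerun selection; the 2-year window and N = 5 are illustrative, since the slides deliberately leave the ideal N open:

```python
from datetime import date, timedelta

def reforecast_dates(start, end, n):
    """Rerun dates spaced every n-th day, so the retrospective sample
    spans all seasons and synoptic regimes without rerunning daily."""
    dates, d = [], start
    while d <= end:
        dates.append(d)
        d += timedelta(days=n)
    return dates

cases = reforecast_dates(date(2010, 4, 1), date(2012, 3, 31), 5)
print(len(cases), "reforecast cases")
```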