
Model Post Processing

Model Output Can Usually Be Improved with Post Processing: it can remove systematic bias, produce probabilistic information from deterministic output, and provide forecasts for parameters that the model is incapable of modeling successfully due to resolution or physics issues (e.g., shallow fog).

Post Processing. Model Output Statistics (MOS) was the first post-processing method used by the NWS (1969). It is based on multiple linear regression and has remained essentially unchanged for 40 years. It does not capture non-linear relationships between predictors and predictands, but it does remove much of the systematic bias.
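The regression idea behind MOS can be illustrated in a few lines. This is a minimal single-predictor sketch with made-up numbers; real MOS equations use many predictors screened from model output, with separate equations per station, season, and projection.

```python
# Minimal sketch of the regression idea behind MOS, with made-up numbers:
# suppose the model runs systematically ~2 F cold at a station. Fitting
# observed temperature against the model's forecast temperature yields a
# correction equation of the MOS form  y = b + a*x.

model_t = [50.0, 55.0, 60.0, 65.0, 70.0]   # hypothetical model forecasts (F)
obs_t   = [52.1, 56.9, 62.0, 67.1, 71.9]   # matching observations (F)

n = len(model_t)
mx = sum(model_t) / n
my = sum(obs_t) / n
sxy = sum((x - mx) * (y - my) for x, y in zip(model_t, obs_t))
sxx = sum((x - mx) ** 2 for x in model_t)

a = sxy / sxx        # regression coefficient
b = my - a * mx      # constant term; absorbs the systematic bias

def mos_forecast(raw_model_t):
    """Apply the fitted MOS-style equation to a raw model forecast."""
    return b + a * raw_model_t
```

With these numbers the fit is y ≈ 2.24 + 0.996x, so a raw 60 F forecast is corrected to 62 F, removing the cold bias.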

MOS is developed by and run at the NWS Meteorological Development Lab (MDL). A full range of products is available at:

Global Ensemble MOS. Ensemble MOS forecasts are based on the 0000 UTC run of the GFS global ensemble system. These runs include the operational GFS, a control version of the GFS (run at lower resolution), and 20 additional runs. Older operational GFS MOS prediction equations are applied to the output from each of the ensemble runs to produce 21 separate sets of alphanumeric bulletins in the same format as the operational MEX message.

Gridded MOS. The NWS needs MOS on a grid for many reasons, including for use in its IFPS analysis/forecasting system. The problem is that MOS is only available at station locations. To deal with this, the NWS created Gridded MOS, which takes MOS at individual stations and spreads it out based on proximity and height differences. It also applies a topographic correction based on a reasonable lapse rate.
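A hedged sketch of the spreading idea follows; the operational Gridded MOS analysis is more elaborate, and the inverse-distance weights and 6.5 C/km lapse rate here are illustrative assumptions, not the MDL algorithm.

```python
# Illustrative sketch of spreading station MOS values to a grid point:
# inverse-distance weighting plus a height correction using an assumed
# "reasonable" lapse rate. Not the operational Gridded MOS algorithm.

LAPSE_C_PER_M = 0.0065   # assumed lapse rate: 6.5 C per km

def grid_temperature(stations, grid_elev_m):
    """stations: iterable of (temp_c, dist_km, station_elev_m) tuples."""
    num = den = 0.0
    for temp_c, dist_km, elev_m in stations:
        # adjust the station value to the grid point's elevation
        adjusted = temp_c - LAPSE_C_PER_M * (grid_elev_m - elev_m)
        w = 1.0 / max(dist_km, 0.1) ** 2    # nearer stations count more
        num += w * adjusted
        den += w
    return num / den
```

For example, a single sea-level station reading 10 C maps to 3.5 C at a 1000-m grid point (10 minus 6.5).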

Current “Operational” Gridded MOS

Localized Aviation MOS Program (LAMP): an hourly updated statistical product. Like MOS, but it combines: –MOS guidance –the most recent surface observations –simple local models run hourly –GFS output

Practical Example of Solving a LAMP Temperature Equation
Y = LAMP temperature forecast
b = equation constant
Predictor x1 = observed temperature at cycle issuance time (value 66.0)
Predictor x2 = observed dewpoint at cycle issuance time (value 58.0)
Predictor x3 = GFS MOS temperature (value 64.4)
Predictor x4 = GFS MOS dewpoint (value 53.0)
Y = b + a1x1 + a2x2 + a3x3 + a4x4
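The constant and coefficients did not survive transcription, so here is the equation solved with hypothetical stand-in values; only the four predictor values come from the slide.

```python
# Solving the LAMP-form equation  Y = b + a1*x1 + a2*x2 + a3*x3 + a4*x4.
# The predictor values x1..x4 come from the slide; the constant b and the
# coefficients a1..a4 below are hypothetical stand-ins.

b = 2.00                        # hypothetical equation constant
a = [0.40, 0.05, 0.45, 0.05]    # hypothetical coefficients a1..a4
x = [66.0, 58.0, 64.4, 53.0]    # x1..x4: obs T, obs Td, GFS MOS T, GFS MOS Td

Y = b + sum(ai * xi for ai, xi in zip(a, x))
```

With these stand-in coefficients Y ≈ 62.9 F; a real LAMP equation would use regression-derived values for b and a1..a4.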

Theoretical Forecast Performance of LAMP, MOS, and Persistence. LAMP outperforms persistence at all projections and handily outperforms MOS in the 1-12 hour projections. The skill of LAMP forecasts begins to converge toward the MOS skill level after the 12-hour projection, and the two become almost indistinguishable by the 20-hour projection. The decreasing predictive value of the observations at later projections causes LAMP skill to diminish toward that of the MOS forecasts.

Verification of LAMP 2-m Temperature Forecasts

MOS Performance. MOS significantly improves on the skill of model output. National Weather Service verification statistics have shown a narrowing gap between human and MOS forecasts.

Cool Season Min Temp – 12 UTC Cycle, Averaged Over 80 US Stations

Prob. of Precip. – Cool Season (0000/1200 UTC Cycles Combined)

MOS Won the Department Forecast Contest in 2003 For the First Time!

Average or Composite MOS. There has been some evidence that an average or consensus MOS is even more skillful than individual MOS output. Vislocky and Fritsch (1997), using forecasting-competition data, found that an average of two or more MOS forecasts (CMOS) outperformed individual MOS forecasts and many human forecasters.

Some Questions. How does current MOS performance, driven by far superior models, compare with that of NWS forecasters around the country? How skillful is a composite MOS, particularly if one weights the members by past performance? How does relative human/MOS performance vary by forecast projection, region, large one-day variations, or when conditions differ greatly from climatology? Considering the results, what should be the role of human forecasters?

This Study. August 2003 – August 2004 (12 months). 29 stations, all at major NWS Weather Forecast Office (WFO) sites. Evaluated MOS predictions of maximum and minimum temperature and probability of precipitation (POP).

National Weather Service locations used in the study.

Forecasts Evaluated: NWS forecasts by real, live humans. EMOS: Eta MOS. NMOS: NGM MOS. GMOS: GFS MOS. CMOS: average of the above three MOSs. WMOS: weighted MOS, each member weighted by its performance during a previous training period (length varying by station). CMOS-GE: a simple average of the two best MOS forecasts, GMOS and EMOS.
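The CMOS and WMOS combinations above can be sketched as follows. All forecast values and training-period MAEs are made up, and the inverse-MAE weighting is one plausible choice rather than necessarily the study's exact scheme.

```python
# Consensus (CMOS) vs. performance-weighted (WMOS) combination of the
# three MOS forecasts. All numbers are hypothetical.

members = {"EMOS": 71.0, "NMOS": 74.0, "GMOS": 72.0}   # max-T forecasts (F)

# CMOS: simple average of the three members
cmos = sum(members.values()) / len(members)

# WMOS: weight each member by its inverse MAE over a training period
train_mae = {"EMOS": 3.0, "NMOS": 4.0, "GMOS": 2.0}    # hypothetical MAEs (F)
weights = {name: 1.0 / mae for name, mae in train_mae.items()}
total_w = sum(weights.values())
wmos = sum(weights[name] * members[name] for name in members) / total_w
```

Here CMOS ≈ 72.33 while WMOS ≈ 72.15, pulled toward GMOS, the member with the best training MAE.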

The Approach: Give the NWS the Advantage! 08-10Z-issued forecast from NWS matched against previous 00Z forecast from models/MOS. –NWS has 00Z model data available, and has added advantage of watching conditions develop since 00Z. –Models of course can’t look at NWS, but NWS looks at models. NWS Forecasts going out 48 (model out 60) hours, so in the analysis there are: –Two maximum temperatures (MAX-T), –Two minimum temperatures (MIN-T), and –Four 12-hr POP forecasts.

Temperature Comparisons

MAE (°F) for the seven forecast types for all stations, all time periods, 1 August 2003 – 1 August 2004. Temperature.

Precipitation Comparisons

Brier Scores for Precipitation for all stations for the entire study period.
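The Brier score used in these POP comparisons is the mean squared difference between the forecast probability and the 0/1 outcome, with lower values better. A small sketch with made-up forecasts:

```python
# Brier score for probability-of-precipitation forecasts: the mean of
# (forecast probability - observed 0/1 outcome)^2. Lower is better;
# a perfect forecast scores 0. Sample values are made up.

def brier_score(probs, outcomes):
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

pop      = [0.9, 0.2, 0.6, 0.1]   # hypothetical 12-h POP forecasts
observed = [1,   0,   1,   0]     # did measurable precipitation occur?
```

These four forecasts score (0.01 + 0.04 + 0.16 + 0.01) / 4 = 0.055.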

Brier Score for all stations, 1 August 2003 – 1 August 2004. Multi-day smoothing is performed on the data.

Brier Score for all stations, 1 August 2003 – 1 August 2004, sorted by geographic region. Precipitation

There are many other post-processing approaches. Neural nets attempt to duplicate the complex interactions between neurons in the human brain.

Dynamic MOS Using Multiple Models. MOS equations are updated frequently, not static as in the NWS approach. Example: DiCast, used by The Weather Channel.

ForecastAdvisor.com

They don’t MOS!

UW Bias Correction of WRF
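The slide gives no algorithmic details, but a generic running-mean bias correction of the kind often applied to mesoscale model output such as WRF can be sketched as follows. The 14-day window is an assumption, not necessarily the UW choice, and this is not the exact UW scheme.

```python
from collections import deque

# Generic running-mean bias correction sketch (not the exact UW scheme):
# track recent forecast-minus-observation errors for one station/variable
# and subtract their mean from new forecasts.

class BiasCorrector:
    def __init__(self, window_days=14):          # assumed training window
        self.errors = deque(maxlen=window_days)  # oldest errors drop off

    def update(self, forecast, observed):
        """Record one verified forecast error."""
        self.errors.append(forecast - observed)

    def correct(self, forecast):
        """Remove the mean recent error from a new forecast."""
        if not self.errors:
            return forecast
        bias = sum(self.errors) / len(self.errors)
        return forecast - bias
```

After two verified forecasts that each ran 2 degrees warm, a new 60-degree forecast would be corrected to 58.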

And many others…