Atmospheric Sciences 452 Spring 2019


Model Post Processing

Model Output Can Usually Be Improved with Statistical Post Processing
Can remove systematic bias.
Can produce probabilistic information from deterministic output and historical performance.
Can provide forecasts for parameters that a model is incapable of simulating successfully due to resolution or physics issues (e.g., shallow fog).

Model Post-Processing
There are a variety of approaches:
Simple bias removal
Model output statistics (MOS)
Machine learning and artificial intelligence (AI)
Bayesian model averaging
Neural nets
And more…
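To make the first approach concrete, here is a minimal trailing-window bias-removal sketch: subtract the model's recent mean error from the new raw forecast. All the numbers are illustrative, not from any real model.

```python
import numpy as np

# Last five days of raw model forecasts and verifying observations (deg F,
# illustrative values only).
past_forecasts = np.array([52.0, 55.0, 51.0, 58.0, 54.0])
past_obs       = np.array([50.0, 53.0, 50.0, 55.0, 52.0])

# Mean error (forecast minus observation) over the window: the model runs warm.
bias = np.mean(past_forecasts - past_obs)      # +2.0 F here

# Remove the bias from today's raw forecast of 57 F.
corrected = 57.0 - bias
print(corrected)                               # 55.0
```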

Model Output Statistics (MOS)
Model Output Statistics was the first post-processing method used by the NWS (1969).
Based on multiple linear regression.
Essentially unchanged over the past five decades.
Does not consider non-linear relationships between predictors and predictands.
Does take out much of the systematic bias.
Does improve the forecast!

Based on Multiple Linear Regression
Y = a0 + a1X1 + a2X2 + …
Y is the predictand: what we want to predict.
Xi are the predictors; they can be model output or observations.
ai are the regression coefficients.
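A sketch of how such coefficients can be fit by ordinary least squares; the synthetic data and NumPy's `lstsq` stand in for the operational training data and regression software.

```python
import numpy as np

# Synthetic training data: 200 past cases with 2 predictors
# (think "model 2-m temperature" and "model dew point" -- illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
true_coeffs = np.array([5.0, 1.8, 0.4])             # a0, a1, a2
y = true_coeffs[0] + X @ true_coeffs[1:] + rng.normal(scale=0.1, size=200)

# Fit Y = a0 + a1*X1 + a2*X2: prepend a column of 1s so a0 is estimated too.
A = np.column_stack([np.ones(len(X)), X])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

print(coeffs)   # close to [5.0, 1.8, 0.4]
```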

Multiple Linear Regression in MOS
Select the Xi and calculate the ai using BOTH model and observational data, so the Xi can be either model output or observations.
Advantage: can adjust for model biases (e.g., too warm).
Disadvantage: needs several years of model runs to derive the equations; if the model changes significantly, you have to do it all again.

How do we select the Xi?
Use screening regression:
Start with at least two years of model output and observations.
Go through a long laundry list of potential predictors.
Select the one with the highest correlation with the predictand.
Then select a second predictor that produces the largest increase in correlation when added to the first.
Keep going, typically until 12 predictors are found.
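The screening procedure above can be sketched as a greedy forward selection: at each step, add whichever remaining predictor most increases the explained variance of the fit. This is a simplified stand-in for the operational screening regression; the data are synthetic.

```python
import numpy as np

def forward_select(X, y, max_predictors=4):
    """Greedy screening regression: repeatedly add the candidate predictor
    that most increases the R^2 of the multiple-regression fit."""
    n, p = X.shape
    chosen = []
    for _ in range(max_predictors):
        best_j, best_r2 = None, -np.inf
        for j in range(p):
            if j in chosen:
                continue
            # Design matrix: intercept + already-chosen predictors + candidate j.
            A = np.column_stack([np.ones(n)] + [X[:, k] for k in chosen + [j]])
            coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
            resid = y - A @ coeffs
            r2 = 1.0 - resid.var() / y.var()
            if r2 > best_r2:
                best_j, best_r2 = j, r2
        chosen.append(best_j)
    return chosen

# Synthetic data: predictor 3 matters most, predictor 0 a little.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 6))
y = 2.0 * X[:, 3] + 0.5 * X[:, 0] + rng.normal(scale=0.1, size=300)

selected = forward_select(X, y, max_predictors=2)
print(selected)   # picks column 3 first, then column 0
```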

Correlation Coefficient (r) Between Two Time Series
0: no correlation
1: perfect correlation
r² = fraction of explained variance.
So for r = 0.5, 25% of the variance (variability) of the second time series can be explained by the variability of the first.
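A quick numerical check of the r vs. r² relationship on synthetic series (the 0.5 coupling below is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
a = rng.normal(size=1000)
b = 0.5 * a + rng.normal(size=1000)    # b is partly driven by a

r = np.corrcoef(a, b)[0, 1]            # sample correlation coefficient
explained = r ** 2                     # fraction of b's variance explained by a
print(f"r = {r:.2f}, explained variance = {explained:.0%}")
```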

Why does the NWS stop at 12 predictors?
Too many predictors can be detrimental!
They add little improvement but add computational expense.
They can overfit the data. Why? Because one can fit small, random, or unique events.
Most of the benefit comes from the first 3-4 predictors.
Generally it takes at least a two-year sample of forecasts to have a sufficient database.

Examples!

Day 2 (30-h) GFS MOS Max Temp Equation for KSLC (Cool Season, 0000 UTC cycle)

      Predictor (Xn)                           Coeff. (an)
      Constant                                 -467.7800
   1  2-m Temperature (21-h proj.)                1.7873
   2  2-m Dewpoint (21-h proj.)                   0.1442
   3  2-m Dewpoint (12-h proj.)                  -0.2060
   4  2-m Dewpoint (27-h proj.)                   0.1252
   5  Observed Temperature (03Z)                  0.0354
   6  850-mb Vertical Velocity (21-h proj.)      19.5759
   7  925-mb Wind Speed (15-h proj.)              0.6024
   8  Sine Day of Year                            1.7111
   9  700-mb Wind Speed (15-h proj.)              0.2701
  10  Sine 2*(Day of Year)                        1.5110

Day 2 (42-h) GFS MOS Max Temp Equation for KUNV (Warm Season, 1200 UTC cycle)

      Predictor (Xn)                           Coeff. (an)
      Constant                                 -432.2469
   1  2-m Temperature (33-h proj.)                0.9249
   2  2-m Dewpoint (33-h proj.)                   0.5751
   3  950-mb Dewpoint (24-h proj.)                0.4026
   4  950-mb Rel. Humidity (27-h proj.)          -0.1784
   5  850-mb Dewpoint (39-h proj.)               -0.2439
   6  Observed Dewpoint (15Z)                    -0.1000
   7  Observed Temperature (15Z)                  0.1270
   8  1000-mb Rel. Humidity (24-h proj.)          0.0027
   9  Sine Day of Year                            0.9763
  10  500-1000 mb Thickness (45-h proj.)          0.0057

Day 2 (30-h) GFS MOS Min Temp Equation for KDCA (Cool Season, 1200 UTC cycle)

      Predictor (Xn)                           Coeff. (an)
      Constant                                 -374.9980
   1  2-m Temperature (21-h proj.)                0.9700
   2  1000-mb Temperature (12-h proj.)            0.3245
   3  2-m Dewpoint (27-h proj.)                   0.1858
   4  2-m Relative Humidity (27-h proj.)         -0.0037
   5  2-m Relative Humidity (15-h proj.)         -0.0380
   6  975-mb Wind Speed (21-h proj.)             -0.0653
   7  Observed Temperature (15Z)                  0.1584
   8  Sine Day of Year                           -0.0342
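To show how such an equation is evaluated, the sketch below plugs hypothetical predictor values into the KDCA coefficients listed above. The slide does not give the predictor units, so the input values (and therefore the output) are illustrative only.

```python
# Constant and coefficients from the KDCA cool-season min-temperature equation.
constant = -374.9980
coeffs = [0.9700, 0.3245, 0.1858, -0.0037, -0.0380, -0.0653, 0.1584, -0.0342]

# Hypothetical predictor values; the real units follow MOS conventions
# that are not stated on the slide.
predictors = [278.0,   # 2-m temperature, 21-h projection
              279.5,   # 1000-mb temperature, 12-h projection
              272.0,   # 2-m dewpoint, 27-h projection
              80.0,    # 2-m relative humidity, 27-h projection
              85.0,    # 2-m relative humidity, 15-h projection
              6.0,     # 975-mb wind speed, 21-h projection
              276.0,   # observed temperature at 15Z
              -0.9]    # sine of day of year

# The MOS forecast is just the constant plus the dot product.
forecast = constant + sum(a * x for a, x in zip(coeffs, predictors))
print(f"MOS min temperature forecast: {forecast:.1f}")
```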

There are HUNDREDS of THOUSANDS of MOS equations
Separate equations for each of thousands of locations (more on this later).
Different equations for each variable.
Different equations for each season and initialization time (e.g., 0000 or 1200 UTC).
Different equations for each projection (e.g., the 30-h forecast).

A Very Important Point
For some (most) parameters, there is a different equation for each station.
These are parameters for which there are samples each hour (e.g., temperature, dew point).
Such equations can take into consideration local biases, local weather features, etc.
Thus, temperature MOS knows about the Puget Sound convergence zone even if the model doesn't.

A Very Important Point
For other parameters, the same equation is used for nearby stations.
These are parameters for which there are FEW samples each hour (e.g., precipitation, severe weather).
Such equations DO NOT take into consideration local biases, local weather features, etc.
Thus, these MOS equations do not add any information on local weather features.

MOS Characteristics
Tends toward climatology at longer projections.
Does poorly during transient (short-term) model failures, whether from a poor synoptic forecast or unusual biases.
Can forecast extreme events, but often misses them.
Only some MOS parameters can add local effects.

MOS Comments
Good for “garden variety” events.
Very hard for humans to beat, but possible in some situations (e.g., shallow cold air).
An average of several MOS forecasts (e.g., NAM and GFS) is better than a single MOS.
MOS reduces or removes long-term, systematic biases; it does little for rare or transient biases.

More MOS
Available for several modeling systems:
NAM MOS
GFS MOS
GEFS MOS

MOS Developed by and Run at the NWS Meteorological Development Lab (MDL) Full range of products available at: http://www.nws.noaa.gov/mdl/synop/index.php

Global Ensemble MOS
Ensemble MOS forecasts are based on the 0000 UTC run of the GFS global ensemble system.
These runs include the operational GFS, a control version of the GFS (run at lower resolution), and 20 additional runs.
Older operational GFS MOS prediction equations are applied to the output from each of the ensemble runs to produce 21 separate sets of alphanumeric bulletins in the same format as the operational MEX message.
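The core idea can be sketched in a few lines: apply one (fixed) MOS equation to every ensemble member's output, yielding a distribution of post-processed forecasts rather than a single number. The equation and values below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical one-predictor MOS equation: forecast = a0 + a1 * (model 2-m temp).
a0, a1 = 2.0, 0.95

# Raw 2-m temperature forecasts from 21 ensemble members (deg F, illustrative).
members = 55.0 + rng.normal(scale=3.0, size=21)

# Apply the same MOS equation to every member.
mos_members = a0 + a1 * members

# The 21 post-processed values form a distribution of guidance.
print(f"ensemble mean {mos_members.mean():.1f}, spread {mos_members.std():.1f}")
```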

Gridded MOS
The NWS needs MOS on a grid for many reasons, including for use in its IFPS analysis/forecasting system.
The problem is that MOS is available only at station locations.
To deal with this, the NWS created Gridded MOS: it takes MOS at individual stations and spreads it out based on proximity and height differences.
It also applies a topographic correction based on a reasonable lapse rate.
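A minimal sketch of the height adjustment, assuming a fixed 6.5 C/km lapse rate; the operational Gridded MOS correction is more sophisticated than this.

```python
def spread_mos_to_gridpoint(station_temp_c, station_elev_m, grid_elev_m,
                            lapse_rate_c_per_km=6.5):
    """Adjust a station MOS temperature to a nearby grid point at a different
    elevation using a fixed lapse rate (simplified topographic correction)."""
    dz_km = (grid_elev_m - station_elev_m) / 1000.0
    return station_temp_c - lapse_rate_c_per_km * dz_km

# Station MOS says 10 C at 200 m; the grid point sits at 1200 m.
print(spread_mos_to_gridpoint(10.0, 200.0, 1200.0))  # 1 km higher -> 6.5 C cooler
```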

Current “Operational” Gridded MOS

Localized Aviation MOS Program (LAMP)
An hourly updated statistical product, like MOS, that combines:
MOS guidance
the most recent surface observations
simple local models run hourly
GFS output

Practical Example of Solving a LAMP Temperature Equation
Y = b + a1x1 + a2x2 + a3x3 + a4x4
Y = LAMP temperature forecast
Equation constant b = -6.99456
Predictor x1 = observed temperature at cycle issuance time (value 66.0)
Predictor x2 = observed dew point at cycle issuance time (value 58.0)
Predictor x3 = GFS MOS temperature (value 64.4)
Predictor x4 = GFS MOS dew point (value 53.0)
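The slide gives the constant and predictor values but not the four coefficients, so the sketch below uses made-up a_i values purely to show the arithmetic of evaluating the equation.

```python
# Constant and predictor values from the slide; the coefficients a_i are
# NOT given there, so the values below are hypothetical placeholders.
b = -6.99456
a = [0.5, 0.1, 0.4, 0.1]           # made-up coefficients, for illustration only
x = [66.0,                          # observed temperature at issuance time
     58.0,                          # observed dew point at issuance time
     64.4,                          # GFS MOS temperature
     53.0]                          # GFS MOS dew point

# Evaluate Y = b + a1*x1 + a2*x2 + a3*x3 + a4*x4.
y = b + sum(ai * xi for ai, xi in zip(a, x))
print(f"LAMP temperature forecast: {y:.1f}")
```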

Theoretical Model Forecast Performance of LAMP, MOS, and Persistence

Verification of LAMP 2-m Temperature Forecasts

MOS Performance Versus Humans MOS significantly improves on the skill of model output. National Weather Service verification statistics have shown a narrowing gap between human and MOS forecasts.

Cool Season Min Temp, 1200 UTC Cycle, Averaged Over 80 US Stations

MOS Won the Department Forecast Contest in 2003 For the First Time!

UW MOS Study
August 1, 2003 – August 1, 2004 (12 months).
29 stations, all at major NWS Weather Forecast Office (WFO) sites.
Evaluated MOS predictions of maximum temperature, minimum temperature, and probability of precipitation (POP).

National Weather Service locations used in the study.

Forecasts Evaluated
NWS: forecast by real, live humans
EMOS: NAM MOS
NMOS: NGM MOS
GMOS: GFS MOS
CMOS: average of the above three MOSs
WMOS: weighted MOS, each member weighted by its performance during a preceding training period (10-30 days, depending on the station)
CMOS-GE: a simple average of the two best MOS forecasts, GMOS and EMOS
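The WMOS idea, weighting each member by its recent performance, might be sketched as follows. Inverse-MAE weighting is an assumption here; the study's actual weighting scheme is not detailed on the slide, and all numbers are illustrative.

```python
import numpy as np

def weighted_consensus(member_forecasts, member_errors):
    """Weight each MOS member by the inverse of its mean absolute error over a
    recent training window, normalized so the weights sum to one."""
    mae = np.asarray(member_errors, dtype=float)
    weights = (1.0 / mae) / np.sum(1.0 / mae)
    return float(np.dot(weights, member_forecasts))

# Three MOS max-temp forecasts (F) and their trailing-window MAEs (illustrative).
forecasts = [61.0, 63.0, 65.0]
maes = [2.0, 1.0, 4.0]          # the middle member has been most accurate lately
print(weighted_consensus(forecasts, maes))   # pulled toward the best member
```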

The Approach: Give the NWS the Advantage!
The 08-10Z-issued NWS forecast is matched against the previous 00Z forecast from the models/MOS.
The NWS has the 00Z model data available, plus the added advantage of watching conditions develop since 00Z.
The models of course can't look at the NWS, but the NWS looks at the models.
NWS forecasts go out 48 hours (the models out 60), so the analysis includes:
Two maximum temperatures (MAX-T),
Two minimum temperatures (MIN-T), and
Four 12-hr POP forecasts.

Temperature Comparisons

Temperature MAE (F) for the seven forecast types for all stations, all time periods, 1 August 2003 – 1 August 2004.

Precipitation Comparisons

Brier Scores for Precipitation for all stations for the entire study period.
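For reference, the Brier score used in this comparison is simply the mean squared error of the probability forecasts against the 0/1 outcomes (0 is perfect; lower is better). The example values are illustrative.

```python
import numpy as np

def brier_score(prob_forecasts, outcomes):
    """Mean squared difference between probability forecasts and 0/1 outcomes."""
    p = np.asarray(prob_forecasts, dtype=float)
    o = np.asarray(outcomes, dtype=float)
    return float(np.mean((p - o) ** 2))

# Four 12-h POP forecasts versus whether rain was observed (illustrative).
pops = [0.9, 0.2, 0.7, 0.1]
rained = [1, 0, 1, 0]
print(brier_score(pops, rained))   # (0.01 + 0.04 + 0.09 + 0.01) / 4 = 0.0375
```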

New NWS Approach: National Blend of Models The National Blend of Models (NBM) is a nationally consistent and skillful suite of calibrated forecast guidance based on a blend of both NWS and non-NWS numerical weather prediction model data and post-processed model guidance. The goal of the NBM is to create a highly accurate, skillful and consistent starting point for the gridded forecast.

The Blend statistically combines a collection of models on a 2.5-km grid, available via a website.

National Blend of Models
Will be the starting point of NWS gridded forecasts.
Saves forecasters time.
Reduces inconsistencies across the boundaries between different offices.

The Private Sector Has Gone Beyond MOS to Superior Post Processing

They don’t do traditional MOS!

Dynamic MOS Using Multiple Models
MOS equations are updated frequently, not static like the NWS equations.
Multiple model and observational inputs.
Example: DICast, developed by NCAR and used by The Weather Channel and AccuWeather.

Dynamical MOS of MOSs

DICAST skill is quite good

ForecastAdvisor.com

https://www.forecastadvisor.com/

Better than NWS MOS

Bayesian Model Averaging (BMA) A good way to optimize the use of ensembles of forecasts to provide calibrated probabilistic guidance. Can weight models and the variability by previous performance.
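A sketch of the BMA predictive density: a weighted mixture of normal distributions, each centered on a (bias-corrected) member forecast. In practice the weights and spread are trained on past performance; all numbers below are illustrative.

```python
import math

def bma_pdf(x, member_forecasts, weights, sigma):
    """BMA predictive density: a weighted mixture of normals, each centered on
    a member forecast, with a common spread sigma (a simplification)."""
    def normal_pdf(v, mu, s):
        return math.exp(-0.5 * ((v - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))
    return sum(w * normal_pdf(x, f, sigma)
               for w, f in zip(weights, member_forecasts))

members = [58.0, 60.0, 63.0]       # three bias-corrected member forecasts (F)
weights = [0.5, 0.3, 0.2]          # better-performing members get more weight
density = bma_pdf(60.0, members, weights, sigma=2.0)
print(f"predictive density at 60 F: {density:.3f}")
```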

Machine Learning/Artificial Intelligence

Machine learning algorithms use computational methods to “learn” information directly from data, without relying on a predetermined equation.
The algorithms adaptively improve their performance as the number of samples available for learning increases.
Machine learning is a field of computer science that uses statistical techniques to give computer systems the ability to “learn” from data (e.g., progressively improve performance on a specific task) without being explicitly programmed.

Two Types of Machine Learning

Decision Trees Are Very Popular, Particularly Random Forests

Neural Nets
Attempt to duplicate the complex interactions between neurons in the human brain.