MOS: Developed by and run at the NWS Meteorological Development Lab (MDL). Full range of products available at: http://www.nws.noaa.gov/mdl/synop/index.php

Global Ensemble MOS: Ensemble MOS forecasts are based on the 0000 UTC run of the GFS global ensemble system. These runs include the operational GFS, a control version of the GFS (run at lower resolution), and 20 additional runs. Older operational GFS MOS prediction equations are applied to the output from each of the ensemble runs to produce 21 separate sets of alphanumeric bulletins in the same format as the operational MEX message.
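
As a concrete (and much simplified) illustration of that step, the sketch below applies one fixed linear MOS equation to every ensemble member's predictors, producing one statistical forecast per run. The predictor set and coefficients are hypothetical, not the operational MDL equations.

```python
import numpy as np

# Hypothetical MOS equation: T = b0 + b1*T850 + b2*RH700 + b3*WSPD10.
# These coefficients are illustrative only, not real MDL regression values.
coeffs = np.array([4.2, 0.95, -0.03, 0.10])

def apply_mos(predictors):
    """predictors: (n_runs, 3) model output -> one MOS temperature per run."""
    X = np.hstack([np.ones((predictors.shape[0], 1)), predictors])  # intercept column
    return X @ coeffs

# 21 ensemble runs, each supplying the same three predictors (fake data here)
rng = np.random.default_rng(0)
members = rng.normal([15.0, 70.0, 5.0], [2.0, 10.0, 1.5], size=(21, 3))

fcsts = apply_mos(members)        # 21 MOS forecasts -> 21 bulletins
print(fcsts.mean(), fcsts.std())  # ensemble-mean forecast and spread
```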

Gridded MOS: The NWS needs MOS on a grid for many reasons, including for use in its IFPS analysis/forecasting system. The problem is that MOS is available only at station locations. To deal with this, the NWS created Gridded MOS, which takes MOS at individual stations and spreads it out based on proximity and height differences, and also applies a topographic correction based on a reasonable lapse rate.
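
Here is a minimal sketch of that spreading step, assuming plain inverse-distance weighting and a fixed standard-atmosphere lapse rate of 6.5 C/km; the operational Gridded MOS analysis is considerably more elaborate than this.

```python
import numpy as np

LAPSE = 6.5 / 1000.0  # C per m; an assumed "reasonable" lapse rate

def gridded_mos_temp(stn_xy, stn_elev, stn_temp, grid_xy, grid_elev, power=2.0):
    """Spread station MOS temperatures to grid points: weight stations by
    inverse distance, after correcting each station value for the
    station-to-gridpoint height difference."""
    out = np.empty(len(grid_xy))
    for i, (gxy, gz) in enumerate(zip(grid_xy, grid_elev)):
        d = np.hypot(*(stn_xy - gxy).T) + 1e-6    # distances; avoid divide-by-zero
        w = 1.0 / d**power                        # closer stations count more
        adj = stn_temp - LAPSE * (gz - stn_elev)  # lapse-rate height correction
        out[i] = np.sum(w * adj) / np.sum(w)
    return out

# Two stations (x/y in m), one grid point halfway up a slope between them
stns = np.array([[0.0, 0.0], [50_000.0, 10_000.0]])
print(gridded_mos_temp(stns, np.array([100.0, 1200.0]), np.array([20.0, 12.0]),
                       np.array([[25_000.0, 5_000.0]]), np.array([600.0])))
```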

Current “Operational” Gridded MOS

MOS Performance: MOS significantly improves on the skill of raw model output. National Weather Service verification statistics have shown a narrowing gap between human and MOS forecasts.

Cool Season Min Temp – 12 UTC Cycle, Average Over 80 US Stations

Prob. of Precip. – Cool Season (0000/1200 UTC Cycles Combined)

MOS Won the Department Forecast Contest in 2003 For the First Time!

Average or Composite MOS: There is some evidence that an average or consensus MOS is even more skillful than individual MOS output. Vislocky and Fritsch (1997), using 1990–1992 data, found that an average of two or more MOSs (CMOS) outperformed individual MOSs and many human forecasters in a forecasting competition.

Some Questions: How does current MOS performance, driven by far superior models, compare with that of NWS forecasters around the country? How skillful is a composite MOS, particularly if one weights the members by past performance? How does relative human/MOS performance vary by forecast projection, region, during large one-day variations, or when conditions depart greatly from climatology? Considering the results, what should be the role of human forecasters?

This Study: 1 August 2003 – 1 August 2004 (12 months). 29 stations, all at major NWS Weather Forecast Office (WFO) sites. Evaluated MOS predictions of maximum temperature, minimum temperature, and probability of precipitation (POP).

National Weather Service locations used in the study.

Forecasts Evaluated:
NWS: forecasts by real, live humans
EMOS: Eta MOS
NMOS: NGM MOS
GMOS: GFS MOS
CMOS: average of the above three MOSs
WMOS: weighted MOS, with each member weighted by its performance during a preceding training period (10–30 days, depending on the station); see the sketch below
CMOS-GE: a simple average of the two best MOS forecasts, GMOS and EMOS
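
A minimal sketch of the two consensus calculations, assuming inverse-MAE weighting over the training window for WMOS (an illustrative choice; the study's exact weighting scheme may differ):

```python
import numpy as np

def cmos(emos, nmos, gmos):
    """CMOS: simple average of the three individual MOS forecasts."""
    return (emos + nmos + gmos) / 3.0

def wmos(train_fcsts, train_obs, today):
    """WMOS-style blend: weight each member by its skill over a trailing
    training window (inverse MAE here, purely for illustration).
    train_fcsts: (n_days, n_members); train_obs: (n_days,); today: (n_members,)."""
    mae = np.abs(train_fcsts - train_obs[:, None]).mean(axis=0)
    weights = (1.0 / mae) / np.sum(1.0 / mae)  # lower past MAE -> larger weight
    return float(weights @ today)
```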

The Approach: Give the NWS the Advantage! The NWS forecast issued at 08–10Z is matched against the previous 00Z forecast from the models/MOS. The NWS has the 00Z model data available, plus the added advantage of watching conditions develop since 00Z. The models, of course, can't look at the NWS, but the NWS looks at the models. NWS forecasts go out 48 hours (the models out 60), so the analysis includes: two maximum temperatures (MAX-T), two minimum temperatures (MIN-T), and four 12-hr POP forecasts.

Temperature Comparisons

Temperature MAE (F) for the seven forecast types for all stations, all time periods, 1 August 2003 – 1 August 2004.

Large one-day temp changes: MAE for each forecast type during periods of large temperature change (10F over 24 hr), 1 August 2003 – 1 August 2004. Includes data for all stations.

MAE for each forecast type during periods of large departure (20F) from daily climatological values, 1 August 2003 – 1 August 2004.

Number of days each forecast is the most accurate, all stations. In (a), ties are counted only when the most accurate temperatures are exactly equal; in (b), a looser tie definition is used, counting ties when the most accurate temperatures are within 2F of each other.

Number of days each forecast is the least accurate, all stations. In (a), ties are counted only when the least accurate temperatures are exactly equal; in (b), a looser tie definition is used, counting ties when the least accurate temperatures are within 2F of each other.

Highly correlated time series: Time series of MAE of MAX-T for period one for all stations, 1 August 2003 – 1 August 2004. The mean temperature over all stations is shown with a dotted line. 3-day smoothing is applied to the data.

Cold spell: Time series of bias in MAX-T for period one for all stations, 1 August 2003 – 1 August 2004. The mean temperature over all stations is shown with a dotted line. 3-day smoothing is applied to the data.

MAE for all stations, 1 August 2003 – 1 August 2004, sorted by geographic region. MOS seems to have the most problems at high-elevation stations.

Bias for all stations, 1 August 2003 – 1 August 2004, sorted by geographic region.

Precipitation Comparisons

Brier scores for precipitation at all stations over the entire study period.
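
The Brier score is just the mean squared difference between the forecast probability and the 0/1 outcome, so lower is better:

```python
import numpy as np

def brier_score(pop, occurred):
    """pop: forecast probabilities in [0, 1]; occurred: 1 if precipitation
    was observed during the 12-hr period, else 0. Lower is better."""
    pop, occurred = np.asarray(pop, float), np.asarray(occurred, float)
    return float(np.mean((pop - occurred) ** 2))
```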

Brier Score for all stations, 1 August 2003 – 1 August 2004. 3-day smoothing is performed on the data.

Precipitation Brier Score for all stations, 1 August 2003 – 1 August 2004, sorted by geographic region.

Reliability diagrams for period 1 (a), period 2 (b), period 3 (c) and period 4 (d).
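
A reliability diagram bins the forecast probabilities and plots each bin's mean forecast probability against the observed relative frequency in that bin; a minimal sketch of the computation:

```python
import numpy as np

def reliability_curve(pop, occurred, n_bins=10):
    """Return (mean forecast probability, observed frequency) per bin.
    Perfectly reliable forecasts fall on the 1:1 line."""
    pop, occurred = np.asarray(pop, float), np.asarray(occurred, float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(pop, edges) - 1, 0, n_bins - 1)
    keep = [k for k in range(n_bins) if np.any(idx == k)]  # skip empty bins
    mean_fcst = np.array([pop[idx == k].mean() for k in keep])
    obs_freq = np.array([occurred[idx == k].mean() for k in keep])
    return mean_fcst, obs_freq
```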

The End http://www.atmos.washington.edu/~jbaars/mos_vs_nws.html