
1 Model Post Processing

2 Model Output Can Usually Be Improved with Post Processing
Can remove systematic bias.
Can produce probabilistic information from deterministic information and historical performance.
Can provide forecasts for parameters that a model is incapable of simulating successfully due to resolution or physics issues (e.g., shallow fog).

3 Model Output Statistics (MOS)
Model Output Statistics was the first post-processing method used by the NWS (1969).
Based on multiple linear regression.
Essentially unchanged over 40 years.
Does not consider non-linear relationships between predictors and predictands.
Does take out much of the systematic bias.

4 Based on Multiple Linear Regression
Y = a0 + a1X1 + a2X2 + …
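A minimal sketch in Python of how such an equation is derived: ordinary least squares fits the coefficients on a history of model predictors and matching observations. The predictors, values, and coefficients here are synthetic illustrations, not an operational MOS equation.

```python
# Minimal sketch of deriving an MOS-style equation with ordinary least
# squares. The predictor matrix X (model output at a station) and the
# predictand y (observed max temperature) are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_cases = 500                      # historical forecast/observation pairs

# Hypothetical predictors: 2-m temperature, 2-m dewpoint, 925-mb wind speed
X = rng.normal(size=(n_cases, 3))
y = 1.8 * X[:, 0] + 0.2 * X[:, 1] + 0.5 * X[:, 2] \
    + rng.normal(scale=0.5, size=n_cases)

# Prepend a column of ones so the first coefficient is the intercept a0
A = np.column_stack([np.ones(n_cases), X])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
print("a0..a3:", np.round(coeffs, 4))

# Forecast for a new model run: Y = a0 + a1*X1 + a2*X2 + a3*X3
x_new = np.array([1.0, 25.0, 12.0, 6.0])
print("MOS forecast:", x_new @ coeffs)
```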

5 Day 2 (30-h) GFS MOS Max Temp Equation for KSLC (Cool Season – 0000 UTC cycle)
Predictor (XN)                              Coeff. (aN)
 1  2-m Temperature (21-h proj.)            1.7873
 2  2-m Dewpoint (21-h proj.)               0.1442
 3  2-m Dewpoint (12-h proj.)
 4  2-m Dewpoint (27-h proj.)               0.1252
 5  Observed Temperature (03Z)              0.0354
 6  850-mb Vertical Velocity (21-h proj.)
 7  925-mb Wind Speed (15-h proj.)          0.6024
 8  Sine Day of Year                        1.7111
 9  700-mb Wind Speed (15-h proj.)          0.2701
10  Sine 2*DOY                              1.5110

6 Day 2 (42-h) GFS MOS Max Temp Equation for KUNV (Warm Season – 1200 UTC cycle)
Predictor (XN)                              Coeff. (aN)
 1  2-m Temperature (33-h proj.)            0.9249
 2  2-m Dewpoint (33-h proj.)               0.5751
 3  950-mb Dewpoint (24-h proj.)            0.4026
 4  950-mb Rel. Humidity (27-h proj.)
 5  850-mb Dewpoint (39-h proj.)
 6  Observed Dewpoint (15Z)
 7  Observed Temperature (15Z)              0.1270
 8  1000-mb Rel. Humidity (24-h proj.)      0.0027
 9  Sine Day of Year                        0.9763
10  mb Thickness (45-h proj.)               0.0057

7 Day 2 (30-h) GFS MOS Min Temp Equation for KDCA (Cool Season - 1200 UTC cycle)
Predictor (XN)                              Coeff. (aN)
 1  2-m Temperature (21-h proj.)            0.9700
 2  1000-mb Temperature (12-h proj.)        0.3245
 3  2-m Dewpoint (27-h proj.)               0.1858
 4  2-m Relative Humidity (27-h proj.)
 5  2-m Relative Humidity (15-h proj.)
 6  975-mb Wind Speed (21-h proj.)
 7  Observed Temperature (15Z)              0.1584
 8  Sine Day of Year

8

9 MOS Developed by and Run at the NWS Meteorological Development Lab (MDL)
Full range of products available at:

10

11

12 Global Ensemble MOS Ensemble MOS forecasts are based on the 0000 UTC run of the GFS global ensemble system. These runs include the operational GFS, a control version of the GFS (run at lower resolution), and 20 additional runs. Older operational GFS MOS prediction equations are applied to the output from each of the ensemble runs to produce 21 separate sets of alphanumeric bulletins in the same format as the operational MEX message.
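A sketch of the idea, with hypothetical coefficients and member values: the same regression equation is evaluated once per ensemble member, giving 21 MOS forecasts whose spread carries the ensemble's uncertainty information.

```python
# Sketch of the Ensemble MOS idea: apply the SAME regression equation to
# each ensemble member's output, yielding one MOS forecast per member.
# Coefficients and member values are hypothetical, not the operational ones.
import numpy as np

a0, a = 10.0, np.array([1.79, 0.14, 0.60])   # hypothetical intercept/coeffs

# 21 members x 3 predictors (e.g., 2-m T, 2-m Td, 925-mb wind speed)
members = np.random.default_rng(1).normal(loc=[20.0, 12.0, 5.0],
                                          scale=[2.0, 1.5, 1.0],
                                          size=(21, 3))

forecasts = a0 + members @ a                 # one forecast per member
print("member forecasts:", np.round(forecasts, 1))
print("ensemble mean: %.1f  spread: %.1f" % (forecasts.mean(),
                                             forecasts.std()))
```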

13 Gridded MOS The NWS needs MOS on a grid for many reasons, including for use in its IFPS analysis/forecasting system. The problem is that MOS is only available at station locations. To deal with this, the NWS created Gridded MOS, which takes MOS at individual stations and spreads it out based on proximity and height differences. It also applies a topographic correction based on a reasonable lapse rate (sketched below).
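A minimal sketch of that spreading step, with made-up station data and an assumed standard-atmosphere lapse rate; the operational analysis is more sophisticated, but the ingredients are the same: distance weighting plus an elevation adjustment.

```python
# Sketch of the Gridded MOS interpolation idea: spread station MOS values
# to a grid point by inverse-distance weighting, after first adjusting each
# station value to the grid point's elevation with an assumed lapse rate.
# All station data and the lapse rate are illustrative assumptions.
import numpy as np

LAPSE = 6.5 / 1000.0          # assumed lapse rate, deg C per meter

# station rows: (x_km, y_km, elev_m, MOS temperature deg C)
stations = np.array([[0.0,  0.0,  300.0, 12.0],
                     [40.0, 10.0, 900.0,  7.5],
                     [15.0, 35.0, 500.0, 10.8]])

def gridded_mos(gx, gy, gelev):
    d = np.hypot(stations[:, 0] - gx, stations[:, 1] - gy)
    w = 1.0 / np.maximum(d, 1e-6) ** 2       # inverse-distance-squared weights
    # adjust each station temperature to the grid point's elevation
    adj = stations[:, 3] - LAPSE * (gelev - stations[:, 2])
    return np.sum(w * adj) / np.sum(w)

print("grid point value: %.1f C" % gridded_mos(20.0, 15.0, 650.0))
```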

14

15

16 Current “Operational” Gridded MOS

17 Localized Aviation MOS Program (LAMP)
Hourly updated statistical product.
Like MOS but combines:
- MOS guidance
- the most recent surface observations
- simple local models run hourly
- GFS output

18 Practical Example of Solving a LAMP Temperature Equation
Y = b + a1x1 + a2x2 + a3x3 + a4x4
Y = LAMP temperature forecast
Equation Constant b =
Predictor x1 = observed temperature at cycle issuance time (value 66.0)
Predictor x2 = observed dew point at cycle issuance time (value 58.0)
Predictor x3 = GFS MOS temperature (value 64.4)
Predictor x4 = GFS MOS dew point (value 53.0)
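Solving the equation in Python, using the predictor values from the slide. The transcript does not give the constant b or the coefficients a1-a4, so the values below are hypothetical placeholders that just show the mechanics.

```python
# Evaluating the LAMP equation from this slide. The predictor values come
# from the slide; the constant b and coefficients a1..a4 are NOT given in
# the transcript, so the ones below are hypothetical placeholders.
b = 5.0                                   # hypothetical equation constant
a = [0.40, 0.05, 0.45, 0.05]              # hypothetical coefficients a1..a4
x = [66.0,                                # x1: observed temperature
     58.0,                                # x2: observed dew point
     64.4,                                # x3: GFS MOS temperature
     53.0]                                # x4: GFS MOS dew point

Y = b + sum(ai * xi for ai, xi in zip(a, x))
print("LAMP temperature forecast: %.1f F" % Y)
```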

19 Theoretical Model Forecast Performance of LAMP, MOS, and Persistence

20

21 Verification of LAMP 2-m Temperature Forecasts

22 MOS Performance MOS significantly improves on the skill of model output. National Weather Service verification statistics have shown a narrowing gap between human and MOS forecasts.

23 Cool Season Min Temp – 12 UTC Cycle
Averaged over 80 US stations

24 MOS Won the Department Forecast Contest in 2003 For the First Time!

25

26

27

28 Average or Composite MOS
There is some evidence that an average or consensus MOS is even more skillful than individual MOS output. Vislocky and Fritsch (1997) found that an average of two or more MOS forecasts (CMOS) outperformed individual MOS forecasts and many human forecasters in a forecasting competition.

29 UW MOS Study August 1 2003 – August 1 2004 (12 months).
29 stations, all at major NWS Weather Forecast Office (WFO) sites. Evaluated MOS predictions of maximum and minimum temperature, and probability of precipitation (POP).

30 National Weather Service locations used in the study.

31 Forecasts Evaluated
NWS: Forecast by real, live humans
EMOS: Eta MOS
NMOS: NGM MOS
GMOS: GFS MOS
CMOS: Average of the above three MOSs
WMOS: Weighted MOS, each member weighted by its performance during a previous training period (whose length varies by station); see the sketch below
CMOS-GE: A simple average of the two best MOS forecasts: GMOS and EMOS
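A sketch of the two consensus flavors, with hypothetical forecasts and training-period MAEs: CMOS is a plain average, while WMOS weights each member by, for example, its inverse MAE over the training period.

```python
# Sketch of the consensus ideas above: CMOS as a simple average and WMOS as
# a performance-weighted average, with weights proportional to 1/MAE over a
# training period. The forecasts and MAEs below are hypothetical.
import numpy as np

forecasts = {"EMOS": 71.0, "NMOS": 74.0, "GMOS": 72.5}   # deg F, hypothetical
train_mae = {"EMOS": 3.1,  "NMOS": 3.8,  "GMOS": 2.9}    # hypothetical MAEs

cmos = np.mean(list(forecasts.values()))

w = np.array([1.0 / train_mae[m] for m in forecasts])    # better model -> larger weight
w /= w.sum()
wmos = np.dot(w, list(forecasts.values()))

print("CMOS: %.1f  WMOS: %.1f  weights: %s" % (cmos, wmos, np.round(w, 2)))
```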

32 The Approach: Give the NWS the Advantage!
The 08-10Z-issued NWS forecast is matched against the previous 00Z forecast from the models/MOS.
The NWS has the 00Z model data available, and has the added advantage of watching conditions develop since 00Z.
The models of course can't look at the NWS, but the NWS looks at the models.
NWS forecasts go out 48 hours (models out 60), so the analysis includes:
two maximum temperatures (MAX-T),
two minimum temperatures (MIN-T), and
four 12-hr POP forecasts.

33 Temperature Comparisons

34 Temperature MAE (F) for the seven forecast types for all stations, all time periods, 1 August 2003 – 1 August 2004.

35 Precipitation Comparisons

36 Brier Scores for Precipitation for all stations for the entire study period.
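For reference, the Brier score plotted here is the mean squared difference between the forecast probability and the observed 0/1 outcome (lower is better). A quick illustration with made-up POP forecasts:

```python
# Sketch of the Brier score used for the POP comparison: the mean squared
# difference between forecast probability and the 0/1 outcome (lower is
# better). Probabilities and outcomes below are hypothetical.
import numpy as np

pop = np.array([0.1, 0.7, 0.4, 0.9, 0.2])        # forecast probabilities
obs = np.array([0,   1,   1,   1,   0])          # 1 = precipitation observed

brier = np.mean((pop - obs) ** 2)
print("Brier score: %.3f" % brier)
```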

37 They don’t do traditional MOS!

38 Dynamic MOS Using Multiple Models
MOS equations are updated frequently, rather than being static like the NWS equations.
Multiple model and observation inputs.
Example: DICast, developed by NCAR and used by The Weather Channel and AccuWeather (a rolling-refit sketch follows below).
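A minimal sketch of the rolling-refit idea, with synthetic data and an arbitrary 30-day window: each day the equation is re-derived from the most recent forecast/observation pairs, so the coefficients track season and model changes. This illustrates the general concept only, not DICast's actual algorithm.

```python
# Sketch of the "dynamic MOS" idea: instead of a fixed equation, refit the
# regression every day on a rolling window of recent forecast/observation
# pairs. Window length and data are illustrative assumptions.
import numpy as np

def refit(X_hist, y_hist, window=30):
    """Refit regression coefficients on the most recent `window` cases."""
    Xw, yw = X_hist[-window:], y_hist[-window:]
    A = np.column_stack([np.ones(len(Xw)), Xw])
    coeffs, *_ = np.linalg.lstsq(A, yw, rcond=None)
    return coeffs                      # [a0, a1, ...], refit again tomorrow

rng = np.random.default_rng(2)
X_hist = rng.normal(size=(200, 2))     # hypothetical predictor history
y_hist = 1.0 + 0.9 * X_hist[:, 0] + 0.3 * X_hist[:, 1] + rng.normal(size=200)

print("today's coefficients:", np.round(refit(X_hist, y_hist), 3))
```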

39 Dynamical MOS of MOSs

40 DICast skill is quite good

41 ForecastAdvisor.com

42

43

44 New NWS Approach: National Blend of Models
The National Blend of Models (NBM) is a nationally consistent and skillful suite of calibrated forecast guidance based on a blend of both NWS and non-NWS numerical weather prediction model data and post-processed model guidance. The goal of the NBM is to create a highly accurate, skillful, and consistent starting point for the gridded forecast.

45 Blend
The first version used 3 models (GFS, GEFS mean, CMCE mean) and provided temperature, wind, and sky cover over the CONUS region two times a day.
More parameters have been added recently.
2.5-km grid.

46

47 There are many other post-processing approaches
Neural nets: attempt to duplicate the complex interactions between neurons in the human brain (a minimal sketch follows below).
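A minimal sketch of a neural-net post-processor using scikit-learn; the data are synthetic and the layer sizes arbitrary. The point is that the network can fit nonlinear predictor-predictand relationships that a linear MOS equation cannot.

```python
# Minimal sketch of a neural-net post-processor: a small multilayer
# perceptron mapping raw model output to an observed quantity, which can
# capture the nonlinear predictor-predictand relationships that linear MOS
# misses. Data are synthetic; layer sizes are arbitrary choices.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 3))                      # model predictors
y = np.tanh(X[:, 0]) + 0.3 * X[:, 1] * X[:, 2]      # nonlinear "truth"

net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000,
                   random_state=0)
net.fit(X[:1500], y[:1500])
print("test R^2: %.2f" % net.score(X[1500:], y[1500:]))
```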

48

49 Bayesian Model Averaging (BMA)
A good way to optimize the use of ensembles of forecasts to provide calibrated probabilistic guidance.
Can weight the models, and their variability, by previous performance; see the sketch below.
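A sketch of the resulting predictive distribution, with hypothetical member forecasts, weights, and spread; in practice BMA fits the weights and spread from training data with an EM algorithm (Raftery et al. 2005).

```python
# Sketch of the BMA idea: the predictive PDF is a weighted sum of PDFs
# centered on each (bias-corrected) member, with weights reflecting past
# performance. Members, weights, and sigma below are hypothetical; real
# BMA estimates them from a training period via EM.
import numpy as np
from scipy.stats import norm

members = np.array([68.0, 71.0, 73.5])      # bias-corrected member forecasts
weights = np.array([0.5, 0.3, 0.2])         # sum to 1, from past performance
sigma = 2.5                                 # per-member spread

def bma_pdf(t):
    """Predictive density at temperature t: a weighted mixture of normals."""
    return np.sum(weights * norm.pdf(t, loc=members, scale=sigma))

print("BMA mean: %.1f" % np.dot(weights, members))
print("P(T <= 70F) = %.2f" % np.sum(weights * norm.cdf(70.0, members, sigma)))
```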

50

51

52 Machine Learning/Artificial Intelligence

