Forecasting wind for the renewable energy market Matt Pocernich Research Applications Laboratory National Center for Atmospheric Research


1 Forecasting wind for the renewable energy market Matt Pocernich Research Applications Laboratory National Center for Atmospheric Research pocernic@ucar.edu

2 Why is forecasting wind hard?
- Turbulence
- An inherently stochastic process
- Issues of scale
- Consequence: attempts at improving a deterministic forecast run up against physical limits.
ENVR Workshop - October 2010

3 Results of Spectral Decomposition (Rife, Davis, Liu 2004 MWR)

4 Needs of the customer - a typical power curve (Figure: power vs. wind speed.)
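The power-curve idea can be sketched numerically. This is a generic, stylized curve with assumed cut-in, rated and cut-out values, not any particular turbine's specification; it shows why a small wind-speed error on the steep section translates into a large power error.

```python
# A stylized turbine power curve: zero below cut-in, roughly cubic
# between cut-in and rated speed, flat at rated power, and zero above
# cut-out. All numbers are generic illustrations (assumed values).

CUT_IN, RATED_SPEED, CUT_OUT = 3.5, 13.0, 25.0   # m/s (assumed)
RATED_POWER = 2000.0                              # kW (assumed)

def power_kw(v):
    """Power output (kW) for hub-height wind speed v (m/s)."""
    if v < CUT_IN or v >= CUT_OUT:
        return 0.0
    if v >= RATED_SPEED:
        return RATED_POWER
    # Cubic interpolation between cut-in and rated speed.
    frac = (v**3 - CUT_IN**3) / (RATED_SPEED**3 - CUT_IN**3)
    return RATED_POWER * frac

# A 1 m/s forecast error matters far more mid-curve than at rated speed:
print(round(power_kw(9.0) - power_kw(8.0), 1))   # ~201.5 kW change
print(round(power_kw(14.0) - power_kw(13.0), 1)) # 0.0 (flat at rated)
```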

5 Outline
- Components of a typical numerical weather prediction system
- Ensembles
- Forecasting system
- Methods of post-processing
- Verification of wind forecasts
- Excitement to come

6 Dynamic Integrated Forecast System (DICast™)
Ensemble Input + Dynamic Weighting + Bias Correction + Dynamic MOS = Optimized Forecast
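The general idea of dynamically weighted, bias-corrected blending can be sketched as follows. This is not the DICast algorithm itself; it is a minimal illustration in which each input model carries a running bias estimate and a weight inversely related to its recent squared error, both updated as verifying observations arrive. All parameters and the synthetic data are invented.

```python
# Sketch of dynamic weighting + bias correction (illustrative only).
alpha = 0.1                                  # update rate (assumed)
truth = [5 + 0.1 * t for t in range(200)]    # synthetic observations
model_a = [x + 2.0 for x in truth]           # model with +2.0 bias
model_b = [x - 1.0 for x in truth]           # model with -1.0 bias

bias = [0.0, 0.0]    # running bias estimate per model
mse = [1.0, 1.0]     # running error variance per model
errors = []
for t in range(200):
    fcsts = [model_a[t], model_b[t]]
    # Blend bias-corrected forecasts with inverse-MSE weights.
    w = [1.0 / m for m in mse]
    wsum = sum(w)
    blend = sum(wi / wsum * (f - b) for wi, f, b in zip(w, fcsts, bias))
    errors.append(abs(blend - truth[t]))
    # After verification, update the running statistics.
    for k, f in enumerate(fcsts):
        err = f - truth[t]
        bias[k] = (1 - alpha) * bias[k] + alpha * err
        mse[k] = (1 - alpha) * mse[k] + alpha * (err - bias[k]) ** 2

print(round(bias[0], 2), round(bias[1], 2))  # learned biases: 2.0 -1.0
```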

7 RTFDDA
Regional-scale NWP models (WRF/MM5) with 4-D continuous data assimilation and forecasts, ingesting MESONETs, GOES, wind profilers, radars, ACARS, all WMO/GTS data, wind-farm met data, etc. A cold start is followed by continuous FDDA and forecast cycles. The modified WRF/MM5 adds a nudging term:
dx/dt = ... + G*W*(x_obs - x_model), where x is any of T, U, V, Q, P, ... and W is a weight function.
Yubao Liu et al., Wind Energy Prediction R&D Workshop, 11-12 May 2010. © 2010 University Corporation for Atmospheric Research
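The nudging term can be illustrated with a toy scalar example. This is only a sketch of Newtonian relaxation with made-up G and W and a trivial model tendency, not the RTFDDA implementation: the model state is continuously relaxed toward an observation during the assimilation window.

```python
# Toy Newtonian relaxation ("nudging"): dx/dt = f(x) + G*W*(x_obs - x).
# G, W, x_obs and the zero model tendency are illustrative assumptions.
G, W = 0.1, 1.0          # relaxation coefficient and weight (assumed)
x_obs = 10.0             # observed value, e.g. a temperature (assumed)
x = 0.0                  # model state, started far from the observation
dt = 1.0

for _ in range(100):
    dxdt = 0.0 + G * W * (x_obs - x)   # model tendency f(x) = 0 here
    x += dt * dxdt                      # forward Euler step

print(round(x, 2))  # state relaxed to ~10.0
```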

8 Assimilation of Wind Farm Data
Met-tower wind speed/direction and turbine hub wind speed undergo data QC and processing, are combined and reformatted, and are fed into WRF RTFDDA together with other met-tower weather observations and all other weather observations.

9 NWP Forecast
- Solve the equations that describe the evolution of the atmosphere.
- We cannot solve the equations analytically, so we discretize them, horizontally and vertically.
- Close the equations by parameterizing small/fast physical processes.
Load Forecasting Workshop, 22 March 2007
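A minimal illustration of the discretization step: 1-D linear advection du/dt + c du/dx = 0 solved with a first-order upwind finite-difference scheme. Real NWP models do this in 3-D with far more complete dynamics plus physics parameterizations; this toy only shows the idea, and all grid and flow parameters are assumptions.

```python
# Upwind finite-difference advection of a square pulse (toy example).
nx, dx = 100, 1.0          # grid size and spacing (assumed)
c, dt = 1.0, 0.5           # wind speed and time step; CFL = c*dt/dx = 0.5 < 1

u = [0.0] * nx
for i in range(40, 50):     # initial square pulse of total "mass" 10
    u[i] = 1.0

for _ in range(40):         # march forward in time
    un = u[:]
    for i in range(1, nx):  # upwind: use the value one cell upstream
        u[i] = un[i] - c * dt / dx * (un[i] - un[i - 1])

# The pulse advects downstream by c*dt*steps = 20 cells; the scheme
# conserves the total but numerically diffuses the sharp edges.
print(round(sum(u), 2))     # total stays 10.0
```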

10 (figure slide)

11 Ensemble Forecasting – a loosely defined term
- Random initial conditions
- Multi-physics model
- Multi-model ("poor man's ensemble")
- Time-lagged ensemble

12 Multiple forecasts of the same event, designed to characterize uncertainty:
- Observational error
- Model selection error
- Parameterization error
Run at many weather centers and forecasting companies. (Figure: wind speed (mph) vs. forecast hour.)

13 Challenges with ensembles
- Tend to be underdispersive (not enough spread).
- Calibration must target both reliability and sharpness.
- Some methods: ensembleBMA (Chris Fraley and colleagues, UW), quantile regression (Hopson), ensemble Kalman filter (more later from Luca).
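Underdispersion can be diagnosed with a rank (Talagrand) histogram: for a calibrated ensemble, the observation's rank among the sorted members is uniform, while ranks piling up at the extremes signal too little spread. The sketch below uses synthetic data in which the ensemble spread is deliberately half the true forecast uncertainty.

```python
# Rank histogram for a deliberately underdispersive synthetic ensemble.
import random

random.seed(0)
n_members, n_cases = 10, 5000
rank_counts = [0] * (n_members + 1)
for _ in range(n_cases):
    # Truth drawn with sigma = 1.0; members with sigma = 0.5, so the
    # ensemble has too little spread (assumed toy distributions).
    truth = random.gauss(0.0, 1.0)
    members = [random.gauss(0.0, 0.5) for _ in range(n_members)]
    rank = sum(m < truth for m in members)   # obs rank in 0..n_members
    rank_counts[rank] += 1

# Extreme ranks occur far more often than the uniform expectation,
# producing the characteristic U shape of an underdispersive ensemble.
expected = n_cases / (n_members + 1)
print(rank_counts[0] > 2 * expected and rank_counts[-1] > 2 * expected)
```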

14 Ensemble BMA
- Removes the bias of each member using linear regression.
- Estimates a weight and variance for each ensemble member by minimizing the continuous ranked probability score.
- Essentially, dresses each ensemble member with a distribution.
- Traditionally uses a Gaussian distribution; for winds, use a gamma.
- Key work by Adrian Raftery, Tilmann Gneiting, J. McLean Sloughter and Chris Fraley (UW).
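The "dressing" idea can be sketched as a weighted mixture of gamma densities, one per bias-corrected member. The member forecasts, weights, bias coefficients, and variance model below are invented for illustration; in the ensembleBMA approach these parameters are estimated from training data.

```python
# Sketch of a BMA predictive density for wind speed as a gamma mixture.
import math

members = [6.2, 7.8, 5.1]      # raw ensemble member forecasts, m/s (assumed)
weights = [0.5, 0.3, 0.2]      # BMA weights, sum to 1 (assumed)
a, b = 0.4, 0.95               # linear bias correction (assumed)
c0, c1 = 0.5, 0.3              # variance model: var = c0 + c1*mean (assumed)

def gamma_pdf(y, shape, scale):
    """Gamma density with shape k and scale theta."""
    if y <= 0.0:
        return 0.0
    return math.exp((shape - 1.0) * math.log(y) - y / scale
                    - math.lgamma(shape) - shape * math.log(scale))

def bma_pdf(y):
    """BMA predictive density: weighted mixture of per-member gammas."""
    total = 0.0
    for w, f in zip(weights, members):
        mean = a + b * f               # bias-corrected member mean
        var = c0 + c1 * mean           # spread grows with speed
        shape = mean * mean / var      # k = mu^2 / sigma^2
        scale = var / mean             # theta = sigma^2 / mu
        total += w * gamma_pdf(y, shape, scale)
    return total

# Sanity check: the predictive density integrates to ~1.
ys = [0.01 * i for i in range(1, 4000)]
area = sum(bma_pdf(y) for y in ys) * 0.01
print(round(area, 2))  # ~1.0
```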

15 Example of ensemble BMA forecasts

16 Regime-switching algorithms (from M. Hering)

17–22 (figure slides)

23 Key Verification Issues
- The most common verification metrics are mean absolute error, RMSE and bias. These do not address concerns like ramping events.
- New statistical forecasts are created every 15 minutes, with new physical model runs every 3 hours. We don't have a developed concept of, or metrics for, forecast consistency.
- Forecast value (costs and benefits) is complicated: the weather forecast is used together with a load forecast, and there are humans in the loop.

24 Contingency table statistics
- The most fundamental verification methods involve statistics derived from a contingency table, which requires that forecasts and observations be categorized into discrete bins.
- Basic contingency table statistics include hit rate, false positive rate, bias, false negative rate and percent correct.
- Changes in power can be classified in the following manner: an increase (or decrease) in forecast power accompanied by a similar increase (or decrease) in observed power is a good forecast; a change forecast but not observed is a false positive; a change observed but not forecast is a false negative; a forecast of no change associated with no change is a good negative forecast.
- The definition of a good forecast can be modified, and the regions need not be angular.
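From 2x2 counts, the basic statistics named above are simple ratios. The counts here are made up for illustration.

```python
# Basic 2x2 contingency-table statistics for a yes/no ramp forecast.
# Counts are illustrative assumptions, not data from the talk.
hits, false_alarms = 42, 18           # event forecast; observed yes / no
misses, correct_negatives = 15, 125   # event not forecast; observed yes / no

n = hits + false_alarms + misses + correct_negatives
hit_rate = hits / (hits + misses)                     # a.k.a. POD
false_alarm_rate = false_alarms / (false_alarms + correct_negatives)
bias = (hits + false_alarms) / (hits + misses)        # frequency bias
percent_correct = (hits + correct_negatives) / n

print(round(hit_rate, 2), round(false_alarm_rate, 2),
      round(bias, 2), round(percent_correct, 2))
```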

25 Forecast vs. Observed Changes
(Figure: regions where forecast and observed changes agree in magnitude and direction, disagree in magnitude and direction, are both small, or constitute false positives and false negatives.)

26 Regions translated into a contingency table

                Observed
Forecast     Up Ramp   Neutral   Down Ramp
Up Ramp
Neutral
Down Ramp

27 Changes in power forecast by the short-term forecast in the first 3 hours. Percent correct 42%; Gerrity skill score 0.24.

                Observed
Forecast     Up Ramp   Neutral   Down Ramp
Up Ramp         970       322       228
Neutral         573       149       603
Down Ramp       300       435       696
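The reported scores can be reproduced from the table: percent correct is the diagonal fraction, and the Gerrity skill score is computed from a scoring matrix built from the observed (column) marginal probabilities.

```python
# Percent correct and Gerrity skill score (GSS) for a K-category
# contingency table. Counts are the 3x3 ramp table from the talk
# (forecast rows, observed columns: up ramp / neutral / down ramp).

table = [
    [970, 322, 228],   # forecast up ramp
    [573, 149, 603],   # forecast neutral
    [300, 435, 696],   # forecast down ramp
]

def percent_correct(t):
    n = sum(sum(row) for row in t)
    return sum(t[i][i] for i in range(len(t))) / n

def gerrity_skill_score(t):
    k = len(t)
    n = sum(sum(row) for row in t)
    # Observed (column) marginal probabilities.
    p = [sum(t[i][j] for i in range(k)) / n for j in range(k)]
    # Odds ratios a_r from cumulative observed probabilities.
    cum, a = 0.0, []
    for r in range(k - 1):
        cum += p[r]
        a.append((1.0 - cum) / cum)
    # Symmetric Gerrity scoring matrix.
    s = [[0.0] * k for _ in range(k)]
    for i in range(k):
        for j in range(i, k):
            val = (sum(1.0 / a[r] for r in range(i))
                   - (j - i)
                   + sum(a[r] for r in range(j, k - 1))) / (k - 1)
            s[i][j] = s[j][i] = val
    return sum(t[i][j] * s[i][j] for i in range(k) for j in range(k)) / n

print(round(percent_correct(table), 2))      # 0.42
print(round(gerrity_skill_score(table), 2))  # 0.24
```

The equitability of the Gerrity score shows up in the last test below: a perfect forecast scores exactly 1.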

28 0-hour lead time, 1–3 hour duration. GSS = 0.21, PC = 48%; GSS = 0.21, PC = 39%; GSS = 0.24, PC = 42%.

29 Criticisms of this approach (from C. Ferro, University of Exeter)
- The classification of the observation as neutral, down ramp or up ramp depends on the value of the forecast. That seems weird! It must lead to some difficulties in interpreting any analysis of the table.
- It is much easier to define categories using boundaries that are parallel to the observation and forecast axes.
- My initial reaction is not to use contingency tables at all, but to model the continuous joint distribution.

30 More data + better data = more fun
- New instruments: LIDAR and SODAR.
- Better-quality observations from existing stations.
- Improvements in data sharing.
- High-quality networks of tall towers (BPA).

31 Space–time processes?

32 Concluding Remarks

