Presentation on theme: "Director of Forecasting" — Presentation transcript:

1 Director of Forecasting
Overview of the Current Status and Future Prospects of Wind Power Production Forecasting for the ERCOT System
John W. Zack, Director of Forecasting, AWS Truewind LLC, Albany, New York
ERCOT Workshop, Austin, TX, June 26, 2009

2 Overview
How Forecasts are Produced
  Overview of forecasting technology, issues and products
  ERCOT-specific issues and forecast products
WGR Data Issues
Forecast Performance
  Recent performance statistics
  Comparison to other regions
  Analysis of cases with much worse than average performance
Road to Improved Forecasts

3 How Forecasts are Produced
Forecasting Tools
Input Data
Configuration for ERCOT
ERCOT Forecast Products

4 Major Components of State-of-the-Art Forecast Systems
[Figure: input data, forecast model components and data flow for a state-of-the-art forecast system]
Combination of physics-based (NWP) and statistical models
Diverse set of input data with widely varying characteristics
The importance of specific models and data types varies with the look-ahead period
Forecast providers vary significantly in how they use forecast models and input data

5 Physics-based Models (also known as Numerical Weather Prediction (NWP) Models)
Differential equations for basic physical principles are solved on a 3-D grid
Initial values of all variables must be specified for each grid cell
Simulates the evolution of all of the basic atmospheric variables over a 3-D volume
Some forecast providers rely solely on government-run models; others run their own model(s)
Roles of provider-run NWP models:
  Optimize the model configuration for the forecasting of near-surface winds
  Use higher vertical or horizontal resolution over the area of interest
  Execute simulations more frequently
  Incorporate data not used by government-run models
  Execute ensembles customized for near-surface wind forecasting

6 Roles of Statistical Models
Empirical equations are derived from historical predictor and predictand data (the "training sample")
Current predictor data and the empirical equations are then used to make forecasts
Many different model-generating methods are available (linear regression, neural networks, etc.); Model Output Statistics (MOS) is a common framework
Roles of Statistical Models:
  Account for processes on a scale smaller than NWP grid cells (downscaling)
  Correct systematic errors in the NWP forecasts
  Incorporate additional (onsite and offsite) observational data that is received after the initialization of the most recent NWP runs or not effectively included in NWP simulations
  Combine meteorological forecasts and power data into power predictions
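The bias-correction role described above can be illustrated with a toy MOS-style regression. This is a minimal sketch on synthetic data, not AWS Truewind's actual procedure; all names and values are illustrative.

```python
# Minimal sketch of a MOS-style correction: fit a linear model on a historical
# "training sample" of NWP-forecast wind speeds vs. observed speeds, then apply
# it to correct new NWP output. Data are synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training sample: NWP forecast speed (predictor) and observed speed
# (predictand). A real system would use months of paired forecast/observation data.
nwp_train = rng.uniform(0.0, 20.0, size=500)                   # m/s
obs_train = 0.9 * nwp_train + 1.2 + rng.normal(0, 1.0, 500)    # systematic bias + noise

# Derive the empirical equation (here, ordinary least squares on one predictor).
slope, intercept = np.polyfit(nwp_train, obs_train, deg=1)

def mos_correct(nwp_speed):
    """Apply the trained correction to new NWP wind-speed forecasts."""
    return slope * np.asarray(nwp_speed) + intercept

print(mos_correct([5.0, 10.0, 15.0]))
```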

7 An Important Consideration: Statistical Model Optimization Criterion
Statistical models are optimized to a specified criterion (either explicitly or by default)
Result: prediction errors have different statistical properties depending on the criterion
Sensitivity to forecast error depends on the user's application
Forecast users and providers should communicate to determine and implement the best criterion
Model notation: I = input variables; W = weights (from the training sample); O = output variables (the forecast); T = target (observed variables); e = forecast error
A performance criterion (PC) and an optimizing algorithm are needed to obtain the W's in the model
A typical PC is mean square error (MSE), but is it a good one?
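A toy illustration of why the criterion matters: for a constant forecast of a skewed quantity, minimizing MSE yields the mean while minimizing MAE yields the median, so the two criteria produce different forecasts with different error properties. This is a generic statistical sketch, not a claim about any particular forecast system.

```python
# Toy demonstration that the optimization criterion changes the forecast:
# for a skewed target distribution, the constant forecast that minimizes MSE
# is the mean, while the one that minimizes MAE is the median.
import numpy as np

rng = np.random.default_rng(1)
target = rng.gamma(shape=2.0, scale=3.0, size=10_000)  # skewed "observed" values

candidates = np.linspace(target.min(), target.max(), 2000)
mse = [np.mean((target - c) ** 2) for c in candidates]
mae = [np.mean(np.abs(target - c)) for c in candidates]

print("MSE-optimal forecast:", candidates[int(np.argmin(mse))], "vs. mean:", target.mean())
print("MAE-optimal forecast:", candidates[int(np.argmin(mae))], "vs. median:", np.median(target))
```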

8 Roles of Plant Output Models
Plant output models represent the relationship of meteorological variables to power production for a specific wind plant
Many possible formulations: implicit or explicit; statistical or physics-based; single- or multi-parameter
Roles of Plant Output Models:
  Capture facility-scale variations in wind (among turbine sites)
  Turbine layout effects (e.g., wake effects)
  Operational factors (availability, turbine performance, etc.)
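As a minimal example of an explicit, single-parameter formulation, the sketch below defines a facility-scale power curve by interpolation between anchor points. The cut-in, rated and cut-out speeds and the capacity are placeholder values, not those of any ERCOT WGR.

```python
# Minimal sketch of an explicit, single-parameter (hub-height wind speed) plant
# output model: a facility-scale power curve defined by linear interpolation.
# All anchor points below are illustrative.
import numpy as np

CAPACITY_MW = 100.0
speeds_ms = np.array([0.0, 3.0, 6.0, 9.0, 12.0, 25.0, 25.01, 40.0])
power_mw  = np.array([0.0, 0.0, 20.0, 70.0, 100.0, 100.0, 0.0, 0.0])  # cut-out at 25 m/s

def plant_output(wind_speed_ms):
    """Estimate facility power (MW) from forecast hub-height wind speed (m/s)."""
    return np.interp(wind_speed_ms, speeds_ms, power_mw)

print(plant_output([4.0, 8.0, 13.0, 30.0]))
```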

9 Forecast Ensembles
Uncertainty is present in any forecast method due to:
  Input data
  Model type
  Model configuration
Approach: perturb the input data and model parameters within their range of uncertainty and produce a set of forecasts (i.e., an ensemble)
Roles of Ensembles:
  The composite of the ensemble members is often the best-performing forecast
  The ensemble spread provides a case-specific measure of forecast uncertainty
  Can be used to estimate the structure of the forecast probability distribution
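The composite and spread roles listed above can be illustrated with a tiny synthetic ensemble. The member count and values here are arbitrary placeholders, not output of any real NWP system.

```python
# Minimal sketch of computing an ensemble composite and spread from a set of
# perturbed forecasts. The member forecasts are synthetic.
import numpy as np

rng = np.random.default_rng(2)
hours, members = 48, 9

# Each row is one ensemble member's wind-speed forecast over the look-ahead period.
ensemble = 8.0 + 3.0 * np.sin(np.linspace(0, 4, hours)) + rng.normal(0, 1.0, (members, hours))

composite = ensemble.mean(axis=0)   # often the best-performing single forecast
spread    = ensemble.std(axis=0)    # case-specific measure of uncertainty

# Hours where the spread is largest can be flagged as high-uncertainty periods.
print("most uncertain hours:", np.argsort(spread)[-3:])
```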

10 Power Forecast Uncertainty
Two primary sources of uncertainty: position on the power curve, and predictability of the weather regime
It is useful to distinguish between the sources:
  Estimate weather-related uncertainty from the spread of the forecast ensemble
  Estimate power-curve-position uncertainty by developing statistics over all weather regimes
[Figure: a real facility-scale power curve]
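The slide does not specify how the two uncertainty components are combined; one simple sketch, assuming the two sources are approximately independent so their variances add, is shown below. Both input values are placeholders.

```python
# Illustrative combination of the two uncertainty sources, assuming approximate
# independence so that variances add. Inputs are placeholder values.
import math

sigma_weather_mw = 40.0   # e.g., derived from ensemble spread, mapped to MW
sigma_curve_mw   = 15.0   # e.g., from power-curve-position statistics over all regimes

sigma_total_mw = math.sqrt(sigma_weather_mw**2 + sigma_curve_mw**2)
print(f"combined power forecast uncertainty ~ {sigma_total_mw:.1f} MW")
```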

11 How the Forecasting Problem Changes by Time Scale
Minutes Ahead
  Phenomena: large eddies, turbulent mixing transitions
  Rapid and erratic evolution; very short lifetimes
  Mostly not observed by the current sensor network
  Forecasting tools: autoregressive trends; very difficult to beat a persistence forecast
  Need: very high-resolution 3-D data from remote sensing
Hours Ahead
  Phenomena: sea breezes, mountain-valley winds, thunderstorms
  Rapidly changing, short lifetimes
  Current sensors detect existence but not structure
  Tools: mix of autoregressive methods with offsite data and NWP; outperforms persistence by a modest amount
  Need: high-resolution 3-D data from remote sensing
Days Ahead
  Phenomena: lows and highs, frontal systems
  Slowly evolving, long lifetimes
  Well observed with the current sensor network
  Tools: NWP with statistical adjustments; much better than a persistence or climatology forecast
  Need: more data from data-sparse areas (e.g., oceans)
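For reference, the persistence baseline mentioned above is trivial to construct: the forecast for every future step is simply the most recent observation. The sketch below uses made-up 5-minute power values.

```python
# Minimal sketch of a persistence forecast, the baseline that minutes-ahead
# methods struggle to beat. Data are synthetic.
import numpy as np

observed = np.array([55.0, 57.0, 60.0, 58.0, 54.0, 50.0])  # MW, 5-min averages

def persistence_forecast(history, steps_ahead):
    """Forecast the last observed value for every look-ahead step."""
    return np.full(steps_ahead, history[-1])

print(persistence_forecast(observed[:3], steps_ahead=3))  # compare against observed[3:]
```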

12 Forecast Products
Deterministic Predictions
  MW production for a specific time interval (e.g., hourly)
  Typically the value that minimizes a performance metric (not necessarily the most probable value)
Probabilistic Predictions
  Confidence bands
  Probability of exceedance (POE) values
Event Forecasts
  Probability of defined events in specific time windows
  Most likely (?) values of event parameters (amplitude, duration, etc.)
  Example: large up or down ramps
Situational Awareness
  Identification of significant weather regimes (alerts)
  Geographic displays of wind and weather patterns to enable event tracking
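One way a probability-of-exceedance product can be derived is from an ensemble of power forecasts for a given hour; the sketch below shows the idea with made-up member values. It is an illustration of the concept only, not the method actually used for the ERCOT WGRPP product.

```python
# Minimal sketch of a probability-of-exceedance (POE) value computed from an
# ensemble of power forecasts for one hour: the POE-80 value is the level
# exceeded by 80% of the members. Member values are illustrative.
import numpy as np

members_mw = np.array([620, 540, 700, 480, 650, 590, 610, 560, 530])  # one hour

def poe_value(members, exceedance_pct):
    """Return the power level exceeded with the given empirical probability."""
    return np.percentile(members, 100 - exceedance_pct)

print("POE-80 (MW):", poe_value(members_mw, 80))
print("POE-50 (MW):", poe_value(members_mw, 50))
```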

13 ERCOT Forecast System
Current:
  Single NWP model run every 6 hrs
  No Model Output Statistics (MOS)
  WGR output model: mixed approach (some statistical using WGR data, some specified power curve)
Planned:
  9 NWP model runs every 6 hrs
  1 NWP model run every hour (RUC)
  Model Output Statistics (MOS)
  Statistically optimized ensemble
  Statistical power curve for all WGRs

14 ERCOT Forecast Products
Hourly updated 1 to 48 hr ahead forecasts in hourly increments
  STWPF: approximately the most likely value
  WGRPP: 80% probability-of-exceedance value
  For each WGR and the aggregate of all WGRs
Delivery Time
  By 15 minutes past each hour
Delivery Vehicles
  xml files via web services to ERCOT
  csv files delivered to each QSE for each WGR for which it schedules
  csv files and graphical displays on a web page for ERCOT access only

15 Data Issues
Overview of Data Quality Issues
Examples

16 Overview of WGR Data Issues
Uses of WGR Data
  Statistically adjust NWP output data
  Define the wind-power relationship
  Identify recent trends for very short-term (0-4 hrs) forecasts
Impact of Data Quality Issues
  Prevent forecast performance from reaching its potential
  Degrade forecast performance
Types of Issues
  Curtailment
  Turbine availability
  Missing data
  Erroneous or locked data
  Unrepresentative met data

Met Data Status                 | # of WGRs
Useable data                    | 35
Very high degree of scatter     | 12
High degree of scatter          | 11
Data appears not to be in mph   | 8
No data or < 10 data points     | 2
Less than 25% of data received  | 2
Total                           | 70

17 Data Quality Example #1: Useable Data
The Good
  Data available through most of the power curve
  Well-defined wind speed vs. power production relationship
  Modest amount of scatter
  Only a few outlier points
[Figure: hourly average power production vs. hourly average wind speed for an ERCOT WGR for non-curtailed hours, May 1, 2009 to June 14, 2009]

18 Data Quality Example #2: Useable But Limited Data
The Not as Good
  Generally good data, but limited amounts of data in the upper range of the power curve
    Light winds
    Curtailment during high-wind periods
  Difficult to obtain an accurate wind speed vs. power relationship for higher wind speeds
[Figure: hourly average power production vs. hourly average wind speed for an ERCOT WGR for non-curtailed hours, May 1, 2009 to June 14, 2009]

19 Data Quality Example #3: Questionable scaling of met data
The Bad
  Wind speeds do not appear to be in mph; perhaps m/s, but not clear
  Problem: the scaling method is not known, and thus the data is not useable
  In addition to the scaling issue, this example has a moderately high amount of scatter
[Figure: hourly average power production vs. hourly average wind speed for an ERCOT WGR for non-curtailed hours, May 1, 2009 to June 14, 2009]

20 Data Quality Example #4: High Amount of Scatter
Another Type of Bad
  A high amount of scatter in the wind speed vs. power production relationship is evident
  Data is not useable since it is not clear which data points have issues
  Possible causes: turbine availability issues? erroneous or unrepresentative anemometer data?
[Figure: hourly average power production vs. hourly average wind speed for an ERCOT WGR for non-curtailed hours, May 1, 2009 to June 14, 2009]

21 Data Quality Example #5: Extremely High Amount of Scatter
The Ugly
  Enormous amount of scatter in the wind speed vs. power production relationship
  Not useable; difficult to define a WGR-scale power curve
  Possible causes: turbine availability issues? erroneous or unrepresentative anemometer data?
[Figure: hourly average power production vs. hourly average wind speed for an ERCOT WGR for non-curtailed hours, May 1, 2009 to June 14, 2009]

22 Forecast Performance
Issues
ERCOT Results: Wind Speed and Power
Case Examples

23 Amount and Diversity of Regional Aggregation Impacts Apparent Forecast Performance
Example: Alberta Wind Forecasting Pilot Project
  1 year: May 2007 - April 2008
  3 forecast providers
  12 wind farms (7 existing + 5 future) divided into 4 geographic regions of 3 farms each
  Hourly 1 to 48 hrs ahead forecasts for farms, regions and system-wide production
Results
  Regional day-ahead forecast RMSE was 15-20% lower than for the individual farms
  System-wide day-ahead forecast RMSE was 40-45% lower than for the individual farms
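The aggregation effect arises because farm-level errors are imperfectly correlated, so they partially cancel when summed. The Monte Carlo sketch below illustrates the mechanism with synthetic numbers; it does not reproduce the Alberta results.

```python
# Illustrative Monte Carlo of the aggregation effect: with imperfectly correlated
# farm-level forecast errors, the system-wide RMSE (as % of capacity) is much
# lower than the average farm-level RMSE. All numbers are synthetic.
import numpy as np

rng = np.random.default_rng(3)
n_farms, n_hours, capacity = 12, 5000, 100.0   # MW per farm
farm_rmse_pct, error_corr = 15.0, 0.4           # assumed farm RMSE and error correlation

# Build correlated farm-level errors with the desired pairwise correlation.
corr = np.full((n_farms, n_farms), error_corr) + (1 - error_corr) * np.eye(n_farms)
cov = corr * (farm_rmse_pct / 100 * capacity) ** 2
errors = rng.multivariate_normal(np.zeros(n_farms), cov, size=n_hours)

farm_rmse   = np.sqrt((errors ** 2).mean(axis=0)).mean() / capacity * 100
system_rmse = np.sqrt((errors.sum(axis=1) ** 2).mean()) / (n_farms * capacity) * 100
print(f"average farm RMSE: {farm_rmse:.1f}% of capacity")
print(f"system-wide RMSE:  {system_rmse:.1f}% of capacity")
```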

24 Impact of Aggregation on Performance Comparisons: Misconceptions
Lack of consideration of the impact of the size and diversity of the generation resource leads to misconceptions about relative forecast performance
Example: US visitors to the Spanish TSO RED Electrica concluded that forecast performance was "phenomenal"
From the visit report: "SIPREOLICO provides detailed hourly forecasts up to 48 hours updated every 15 minutes. The accuracy of the forecast is phenomenal: The forecast root mean square error for the 48-hour-ahead forecast is below 5.5% of the installed wind generation capacity."
However, the size and diversity of this aggregation is so great that there is a huge aggregation effect; when this is considered, the performance is typical
RED Electrica system characteristics: 14,877 MW installed; 575 wind parks; average of about 30 MW capacity per park; peak generation ~10,000 MW

25 ERCOT Forecast Performance
Wind Speed
Power Production

26 System-Wide Wind Speed Forecasts
Parameter
  Average wind speed over all wind generation areas on the ERCOT system
  Provides a perspective on forecast performance that is independent of the power data issues (curtailment, availability, etc.)
Results
  Very low bias after January
  Low MAE and RMSE: MAE ~1.2 m/s (15% of average); RMSE ~1.5 m/s (19% of average)

27 Day-Ahead Wind Speed Forecast MAE: Comparison Between Regions
Comparison Periods
  ERCOT: 1/1/09 - 5/31/09
  Alberta: 5/1/07 - 4/30/08
  NYISO: 1/1/09 - 5/31/09
Results
  ERCOT sites have lower MAE than the Alberta sites but higher than the NYISO sites
ERCOT MAE considerations
  The period covers 5 higher-MAE months; the annual MAE is probably lower
  Adversely impacted by wind speed data issues
  Based on an early version of the ERCOT forecast system

28 Power Production Forecast Performance Evaluation Issues
Issue: it is difficult to determine the power production values to use in the forecast evaluation due to curtailment, turbine availability and data quality issues
AWST's Performance Evaluation Datasets
  QC1: locked and missing data removed
  QC2: QC1 plus removal of data during ERCOT-specified curtailed periods
  QC3: QC2 plus a power curve check; data rejected if |P_reported - P_power_curve| > 30% of capacity, an attempt to eliminate data with significant unreported availability issues
  Synthetic power: best guess at power production for all hours; reported power if deemed representative; power estimated from the power curve if the wind speed is deemed reliable; otherwise power estimated from the capacity factor of the most highly correlated nearby facility
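The QC3 power-curve check stated above (reject hours where reported power differs from the power-curve estimate by more than 30% of capacity) can be sketched as follows. The power-curve function, capacity and sample values are placeholders.

```python
# Minimal sketch of a QC3-style power-curve check: reported power is rejected
# when it differs from the power-curve estimate by more than 30% of capacity,
# to screen out hours with unreported availability problems. Placeholder data.
import numpy as np

CAPACITY_MW = 100.0
TOLERANCE_MW = 0.30 * CAPACITY_MW

def power_curve_estimate(wind_speed_ms):
    """Placeholder facility power curve (MW)."""
    return np.interp(wind_speed_ms, [0, 3, 12, 25], [0, 0, CAPACITY_MW, CAPACITY_MW])

reported_mw = np.array([10.0, 55.0, 95.0, 20.0])
speeds_ms   = np.array([4.0, 8.0, 13.0, 14.0])   # last hour looks like an availability issue

keep = np.abs(reported_mw - power_curve_estimate(speeds_ms)) <= TOLERANCE_MW
print("hours retained for evaluation:", keep)
```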

29 System-Wide Power Forecasts: Impact of Data on Bias & MAPE
Lack of consideration of curtailment periods (the QC1 dataset) leads to much higher bias and MAPE values in some months (e.g., February)
Considerable positive bias remains even with the QC3 data, despite a very low wind speed forecast bias

30 System-Wide Power Forecasts: Day-Ahead MAPE Comparison
Comparison specs
  ERCOT: ~8,000 MW; 24 hr ahead
  Alberta: 355 MW; 24 hr ahead
  CAISO: 815 MW; day-ahead (PIRP)
  NYISO: 688 MW; day-ahead
Results
  ERCOT MAPE is lower than that achieved by any forecaster in Alberta
  ERCOT MAPE is higher than that for the CAISO and NYISO aggregates

31 Forecast Performance: Case Examples
Examples of difficult cases from 2009

32 April 29, 2009 Case (forecast delivered 3:15 PM CDT April 28)
Large overestimate of the power production from 1 AM to 2 PM
Error caused by a poor prediction of the intensification of a storm to the north of Texas and the associated southerly winds
Error most likely attributable to the large-scale weather predictions from National Weather Service data and models

33 Surface Weather Map: 6 AM
April 29, 2009 Case
  The west-to-east pressure gradient is too strong, and therefore the winds are too strong
  An ensemble with input from non-NWS (e.g., Canadian) NWP data might help in these types of cases
[Figure: NWP forecast of 70 m wind speed (m/s) for 6 AM, used for the 3 PM forecast on April 28, 2009]

34 May 11, 2009 Case (forecast delivered 3:15 PM CDT May 10)
Large overestimate of the power production from 8 AM to 2 PM
Error caused by slight errors in the placement of a slow-moving frontal zone across central Texas
Much of the error was associated with higher-than-forecast winds in the Sweetwater area

35 May 11, 2009 Case
A frontal zone with large horizontal variation in wind speeds and direction was located near the wind generation areas
Small errors in frontal placement can lead to large errors in power forecasts
An example of a typical high-uncertainty situation
An ensemble forecast should help anticipate the uncertainty and hedge the "most likely" forecast

36 May 16, 2009 Case (forecast delivered 3:15 PM CDT May 15)
Large overestimate of the power production from midnight to 5 AM as a cold front approached and passed through the region
NWP models overestimated the southerly wind speeds ahead of the front

37 Surface Weather Map: 3 AM
May 16, 2009 Case
  The position and timing of the front are very good
  However, the NWP model greatly overestimated the southerly wind speeds ahead of the front
[Figure: NWP forecast of 70 m wind speed (m/s) for 3 AM, used for the 3 PM forecast on May 15, 2009]

38 Road to Improved Forecasts
Resolve Data Issues
Ensembles
Regime- and Event-Specific Methods

39 Anticipated Steps to Improve Forecast Performance
Resolve WGR data issues
  Obtain maximum information about curtailments and turbine availability
  Obtain meteorological data from all WGRs
  Resolve issues with meteorological data for many sites
Implement the full statistical ensemble system
  Dependent on the quality of WGR power and meteorological data
  Ensembles will help most in situations with large uncertainty
Take advantage of regime-based and event-based techniques
  Regime-based MOS
  Event-based MOS

40 Relative Role of NWP and MOS in Day-Ahead Forecast Performance: An Example
The gap between the blue and red lines is a measure of the contribution of WGR data to forecast performance via MOS
The raw NWP forecasts have an average MAE improvement over persistence of ~40%, and the MOS procedure increases that to ~60%
The MOS improvement increment varies from week to week (probably a regime effect)
[Figure: weekly power production forecast MAEs for an individual WGR in the eastern US for an 8-week fall period; MAEs are scaled by the MAE of a persistence forecast for the entire 8-week period]
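The scaling described in the figure caption (forecast MAE divided by the MAE of a persistence forecast over the same period) can be computed as in the sketch below; "improvement over persistence" is one minus that ratio. All arrays are synthetic placeholders.

```python
# Minimal sketch of MAE scaled by persistence MAE, the skill measure used in
# the figure above. Data are synthetic; the "persistence" forecast here is the
# previous day's observed production.
import numpy as np

observed_today     = np.array([40.0, 55.0, 70.0, 65.0, 50.0, 35.0])
observed_yesterday = np.array([30.0, 42.0, 58.0, 72.0, 60.0, 45.0])  # persistence forecast
forecast_mw        = np.array([45.0, 50.0, 66.0, 70.0, 47.0, 40.0])  # e.g., NWP or NWP+MOS

mae_fcst = np.mean(np.abs(forecast_mw - observed_today))
mae_pers = np.mean(np.abs(observed_yesterday - observed_today))
print("MAE scaled by persistence MAE:", round(mae_fcst / mae_pers, 2))
print("improvement over persistence:", round(1 - mae_fcst / mae_pers, 2))
```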

41 Impact of Weather Regimes
Example: power production forecasts during AESO's Alberta Pilot Project
Significant winter wind regimes in Alberta were identified for the season, and forecast performance was analyzed by regime
The shallow cold air (SCA) regime occurs when slowly moving cold air from the north or east undercuts a warmer air mass typically characterized by strong west or southwest winds
Power production forecast error was much larger for the SCA regime than for the non-SCA cases, primarily because the characteristics of the NWP forecast errors were quite different in the SCA and non-SCA regimes

42 Event-Specific Forecasting Example
Large system-wide ramps on multiple time scales occurred in the 10 PM to midnight period on March 10, 2009
Caused by a southward-propagating cold front
Investigated as part of the development of ELRAS (ERCOT Large Ramp Alert System)

43 March 10-11 Case: Precursor Conditions
[Figures] Top: measured wind speed at a north Texas WGR. Right: NWP-produced map of 15-minute wind speed change at 7:15 PM CDT.

44 Use of an Event-Tracking Parameter
Parameters: distance to, and amplitude of, the maximum positive wind speed change along a radial path from the forecast site
Example: the parameters indicate a consistent and coherent approach of the feature
The approach can be used with NWP model output data and/or offsite measured data
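The tracking parameters described above reduce, in the simplest case, to locating the strongest positive speed change along the radial path and recording its distance and amplitude. The sketch below illustrates that reduction on synthetic values; it is not the ELRAS implementation.

```python
# Minimal sketch of an event-tracking parameter: along a radial path upstream of
# the forecast site, find the distance to, and amplitude of, the maximum positive
# short-interval wind-speed change. Input values are synthetic samples of a
# 15-minute speed-change field along that path.
import numpy as np

path_km = np.arange(0, 210, 10)  # distance outward from the site, every 10 km
speed_change_ms = np.array([0.1, 0.0, 0.2, 0.3, 0.5, 1.0, 2.5, 6.0, 4.0, 2.0,
                            1.0, 0.5, 0.2, 0.1, 0.0, 0.0, 0.1, 0.0, 0.0, 0.1, 0.0])

def track_event(path_km, change):
    """Return (distance_km, amplitude) of the strongest positive speed change."""
    i = int(np.argmax(change))
    return path_km[i], change[i]

dist, amp = track_event(path_km, speed_change_ms)
print(f"ramp precursor: +{amp} m/s per 15 min at {dist} km from the site")
```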

45 Ramp events have different meteorological causes and thus the optimal parameters for tracking and predicting them are different

46 Summary
State-of-the-art forecasts are produced from a combination of physics-based and statistical models
ERCOT forecasts have been generated from an early version of the forecast system that has minimal reliance on WGR data
The ERCOT forecast system will soon be expanded to include an ensemble of NWP models and MOS methods, which rely more heavily on WGR data
Thus far, WGR and system data issues have significantly limited the perceived and actual performance of the ERCOT forecasts
ERCOT system-wide day-ahead forecasts have lower MAPE than in the Alberta Pilot Project but higher than in California and New York
Cases with high system-wide error have often been found to be associated with large errors in government-run model predictions or with situations of high uncertainty (multi-model ensembles should help in these cases)
A considerable amount of ERCOT-focused forecasting R&D is in progress to improve the forecasts beyond the expected gains from better WGR data and the expanded forecast system

