
1 Exploring the Use of Object-Oriented Verification at the Hydrometeorological Prediction Center. Faye E. Barthold (1,2), Keith F. Brill (1), and David R. Novak (1). (1) NOAA/NWS/Hydrometeorological Prediction Center; (2) I.M. Systems Group, Inc.

2 What is Object-Oriented Verification? Considers the relationship between individual precipitation areas instead of performance over an entire forecast grid Methods: – Neighborhood – Scale separation – Features-based – Field deformation

3 Why use Object-Oriented Verification? Avoids the "double penalty" problem – Traditional gridpoint verification penalizes a displaced forecast twice: as a miss where precipitation was observed but not forecast, and as a false alarm where it was forecast but not observed Provides additional information about why a forecast was correct or incorrect – Spatial displacement, axis angle difference, etc. Goal is to evaluate forecast quality in a manner similar to a forecaster completing a subjective forecast evaluation
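The "double penalty" can be seen with a toy example. A minimal sketch in Python (the 1-D grid and numbers are hypothetical, not from the presentation): a forecast that reproduces the observed rain area exactly but displaces it scores zero hits, so it is charged both a miss and a false alarm.

    import numpy as np

    # Hypothetical 10-point grid: the observed rain area (points 2-4) is
    # forecast with the correct size and shape but shifted to points 5-7.
    obs = np.zeros(10, dtype=bool)
    fcst = np.zeros(10, dtype=bool)
    obs[2:5] = True    # observed precipitation
    fcst[5:8] = True   # same-sized area, displaced by three grid points

    hits = np.sum(fcst & obs)            # 0
    misses = np.sum(~fcst & obs)         # 3  (penalty #1)
    false_alarms = np.sum(fcst & ~obs)   # 3  (penalty #2)

    csi = hits / (hits + misses + false_alarms)
    print(hits, misses, false_alarms, csi)   # 0 3 3 0.0

Object-oriented methods instead credit the forecast for producing a similar object and report the displacement directly.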

4 Davis et al. (2006)

5 Method for Object-Based Diagnostic Evaluation (MODE) Part of the Model Evaluation Tools (MET) verification package from the Developmental Testbed Center (DTC) Defines "objects" in the forecast and observed fields based on user-defined precipitation thresholds Attempts to match each forecast object with an observed object based on the similarity of a variety of object characteristics – Matching is determined by user-defined weights placed on a number of parameters – Interest value: objects are matched when their interest value is ≥ 0.70
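A minimal sketch of the fuzzy-logic matching idea, assuming the standard MODE approach of combining per-attribute interest values with user-defined weights; the attribute names, interest values, and weights below are illustrative placeholders, not HPC's configuration.

    # Each forecast/observed object pair is compared attribute by attribute;
    # each comparison yields an "interest" in [0, 1].  The weighted average
    # of these interests is the pair's total interest, and the pair is
    # matched when it reaches the threshold (0.70 in the HPC setup).
    def total_interest(attr_interest, weights):
        num = sum(weights[k] * attr_interest[k] for k in attr_interest)
        den = sum(weights[k] for k in attr_interest)
        return num / den

    # Hypothetical values for one pair of objects.
    pair = {
        "centroid_distance": 0.9,   # nearly co-located
        "boundary_distance": 0.8,
        "area_ratio": 0.4,          # sizes differ substantially
        "angle_difference": 0.7,
    }
    weights = {
        "centroid_distance": 2.0,
        "boundary_distance": 4.0,
        "area_ratio": 1.0,
        "angle_difference": 1.0,
    }

    ti = total_interest(pair, weights)
    print(round(ti, 4), ti >= 0.70)   # 0.7625 True

Because the area-ratio interest carries little weight here, the pair is matched despite the size mismatch; the configuration changes discussed later address exactly this behavior.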

6 Configuration Parameters: convolution radius, merging threshold, interest threshold, centroid distance, convex hull distance, area ratio, complexity ratio, intensity ratio, area threshold, maximum centroid distance, boundary distance, angle difference, intersection area ratio, intensity percentile
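One way to think about these parameters, sketched below as a hypothetical grouping (the values are placeholders, not HPC's operational settings): some control how objects are defined by smoothing and thresholding the raw field, and the rest control how forecast and observed objects are compared and matched.

    # Hypothetical grouping of the MODE configuration parameters by role.
    # All values are illustrative placeholders.
    mode_config = {
        # Object definition: smooth the raw field with a circular
        # convolution filter, then threshold it to carve out objects.
        "object_definition": {
            "convolution_radius_gridpoints": 5,
            "precip_threshold_in": 0.25,
            "merging_threshold_in": 0.20,       # lower threshold used to merge nearby objects
            "area_threshold_grid_squares": 10,  # smaller objects are discarded
        },
        # Matching: pairwise attribute comparisons combined into one
        # interest value; pairs are matched when interest >= 0.70.
        "matching": {
            "interest_threshold": 0.70,
            "max_centroid_distance_gridpoints": 400,
            "intensity_percentile": 90,
            "weights": {
                "centroid_distance": 2.0,
                "boundary_distance": 4.0,
                "convex_hull_distance": 0.0,
                "angle_difference": 1.0,
                "area_ratio": 1.0,
                "intersection_area_ratio": 2.0,
                "complexity_ratio": 0.0,
                "intensity_ratio": 0.0,
            },
        },
    }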

7 MODE Output: forecast objects and observed objects, with matched objects paired, unmatched forecast objects counted as false alarms, and unmatched observed objects counted as misses

8 MODE at HPC Running daily at HPC since April 2010 – 24hr QPF – 6hr QPF (added September 2010) Supplements traditional verification methods Training opportunities – Provide spatial information about forecast errors – Quantify model biases – COMET COOP project with Texas A&M

9 Forecaster Feedback Too much smoothing of the forecast and observed fields, particularly at 32 km – Sizeable areas of precipitation not identified as objects – Trouble capturing elongated precip areas
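A minimal sketch of why heavy smoothing can erase objects (a hypothetical 1-D field and a simple moving average standing in for MODE's 2-D circular convolution): a narrow band of heavy precipitation is smeared below the threshold before objects are defined, so it never becomes an object.

    import numpy as np

    # Hypothetical object-definition step: smooth the raw field, then threshold.
    raw = np.zeros(40)
    raw[18:21] = 1.2          # narrow band exceeding a 1.0" threshold

    def smooth(field, radius):
        """Moving-average filter standing in for MODE's circular convolution."""
        kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
        return np.convolve(field, kernel, mode="same")

    threshold = 1.0
    for radius in (1, 5):
        in_objects = smooth(raw, radius) >= threshold
        print(f"radius={radius}: {in_objects.sum()} grid points in objects")
    # radius=1 still identifies a (shrunken) object; radius=5 smooths the
    # band below 1.0" and no object is identified at all.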

10 Forecast vs. Observed: HPC forecast at the 1" (25.4 mm) threshold and Stage IV analysis at the 1" (25.4 mm) threshold. Large forecast and observed areas exceed 1 in, but only small objects are identified

11 Forecaster Feedback Too much smoothing of the forecast and observed fields, particularly at 32 km – Sizeable areas of precipitation not identified as objects – Trouble capturing elongated precip areas Interest value difficult to interpret – Seems to be higher for high resolution models than for operational models

12 Forecast vs. Observed: EAST_ARW forecast and Stage IV analysis at the 0.25" (6.35 mm) threshold. Interest value: 1.000

13 Forecaster Feedback Too much smoothing of the forecast and observed fields, particularly at 32 km – Sizeable areas of precipitation not identified as objects – Trouble capturing elongated precip areas Interest value difficult to interpret – Seems to be higher for high resolution models than for operational models Matches between small and large objects have unexpectedly high interest values

14 Forecast vs. Observed: HPC forecast and Stage IV analysis at the 0.25" (6.35 mm) threshold. Interest value: 0.7958. Why are these objects matched?

15 Forecaster Feedback Too much smoothing of the forecast and observed fields, particularly at 32 km – Sizeable areas of precipitation not identified as objects – Trouble capturing elongated precip areas Interest value difficult to interpret – Seems to be higher for high resolution models than for operational models Matches between small and large objects have unexpectedly high interest values What is the line around some groups of objects?

16 Forecast vs. Observed: EAST_NMM forecast and Stage IV analysis at the 0.25" (6.35 mm) threshold. What does the line around some objects mean?

17 Configuration Changes Eliminate area threshold requirement (operational models only) – GOAL: prevent small objects (<10 grid squares) from being automatically removed from the analysis Increase weighting on boundary distance parameter – GOAL: give more credit to objects that are in close proximity to one another Increase weighting on area ratio parameter – GOAL: prevent very large objects from being matched with very small objects Hazardous Weather Testbed configuration (high resolution models only) Iowa State configuration
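A hypothetical illustration of the weighting changes (the interests and weights are invented; they are not the values behind the interest scores on the following slides): for a pair of objects that are close together but very different in size, upweighting the area ratio pulls the total interest below the 0.70 matching threshold.

    # Effect of upweighting the area ratio for a small-vs-large object pair.
    # All numbers are illustrative placeholders.
    def total_interest(attr_interest, weights):
        return sum(weights[k] * attr_interest[k] for k in attr_interest) / sum(weights.values())

    # Close together (high distance interests) but very different in size.
    pair = {"centroid_distance": 0.95, "boundary_distance": 0.95, "area_ratio": 0.10}

    original   = {"centroid_distance": 2.0, "boundary_distance": 4.0, "area_ratio": 1.0}
    upweighted = {"centroid_distance": 2.0, "boundary_distance": 4.0, "area_ratio": 4.0}

    for name, w in (("original weights", original), ("area ratio upweighted", upweighted)):
        ti = total_interest(pair, w)
        print(f"{name}: interest={ti:.3f}, matched={ti >= 0.70}")
    # original weights: interest=0.829, matched=True
    # area ratio upweighted: interest=0.610, matched=False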

18 EAST_NMM 6hr precip forecast valid 12Z 9 June 2010

19 6hr accumulated precip ending 12Z 9 June 2010

20 Original Configuration (0.25 inch threshold). Forecast Objects vs. Observed Objects: why are these objects matched? (Interest value: 0.7671)

21 Configuration Change: Increase Boundary Distance Parameter Weight (0.25 inch threshold). Forecast Objects vs. Observed Objects: objects are still matched (Interest value: 0.8109)

22 Configuration Change: Increase Area Ratio Parameter Weight (0.25 inch threshold). Forecast Objects vs. Observed Objects: objects are now unmatched (Interest value: 0.6295)

23 Configuration Change: Increase Both Boundary Distance and Area Ratio Parameter Weights (0.25 inch threshold). Forecast Objects vs. Observed Objects: objects remain unmatched (Interest value: 0.6882)

24 Hazardous Weather Testbed Configuration (0.25 inch threshold). Forecast Objects vs. Observed Objects

25 Iowa State Configuration (0.25 inch threshold). Forecast Objects vs. Observed Objects: objects are unmatched (Interest value: N/A)

26 Challenges MODE is highly configurable – Difficult to determine which parameters to change to get the desired results Interest values difficult to understand – Seem to be resolution-dependent – No point of reference for the difference between an interest value of 0.95 and 0.9 – Does interest value of 1.0 indicate a perfect forecast? MODE generates large amounts of data

27 Future Work Determine the ideal configuration to use with 6hr verification – Examine multiple cases across all seasons Make graphical output available online to allow for easier forecaster access Make 24hr verification available in real time for HPC/CPC daily map discussion Investigate MODE performance in cool season events Make better use of text output

28 References Davis, C., B. Brown, and R. Bullock, 2006: Object-based verification of precipitation forecasts. Part I: Methodology and application to mesoscale rain areas. Mon. Wea. Rev., 134, 1772-1784. Gallus, W. A., 2010: Application of object-based verification techniques to ensemble precipitation forecasts. Wea. Forecasting, 25, 144-158. Gilleland, E., D. Ahijevych, B. G. Brown, B. Casati, and E. E. Ebert, 2009: Intercomparison of spatial forecast verification methods. Wea. Forecasting, 24, 1416-1430. Model Evaluation Tools (MET) was developed at the National Center for Atmospheric Research (NCAR) through grants from the United States Air Force Weather Agency (AFWA) and the National Oceanic and Atmospheric Administration (NOAA). NCAR is sponsored by the United States National Science Foundation.

