
1 Object-oriented verification of WRF forecasts from 2005 SPC/NSSL Spring Program Mike Baldwin Purdue University

2 References
Baldwin et al., 2005: Development of an automated classification procedure for rainfall systems. Mon. Wea. Rev.
Baldwin et al., 2006: Challenges in comparing realistic, high-resolution spatial fields from convective-scale grids. Symposium on the Challenges of Severe Convective Storms, 2006 AMS Annual Meeting.
Baldwin et al., 2003: Development of an events-oriented verification system using data mining and image processing algorithms. 3rd Conf. on Artificial Intelligence, 2003 AMS Annual Meeting.

3 Types of data
gridded fields of precipitation or reflectivity
GRIB format has been used
program expects models and observations to be on the same grid
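For illustration, one common way to read such fields in Python is pygrib; the slides only say GRIB is used, so the library choice, file name, and field name below are hypothetical:

import pygrib

# Open a GRIB file and pull one 2-D field; names here are placeholders.
grbs = pygrib.open("wrf_forecast.grb")
msg = grbs.select(name="Total precipitation")[0]
precip = msg.values              # 2-D array on the model grid
lats, lons = msg.latlons()       # must match the observation grid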

4 Basic strategy
compare forecast “objects” with observed “objects”
objects are described by a set of attributes related to morphology, location, intensity, etc.
a multi-variate “distance” combining all attributes can be defined to measure differences between “objects”
“good” forecasts will have small “distances”; “bad” forecasts will have large “distances”
errors in specific attributes can be analyzed for pairs of matching fcst/obs objects

5 Method
find areas of contiguous precipitation greater than a threshold
expand those areas by ~20% and connect objects that are within 20 km of each other
characterize objects by location, mean, variance, size, and measures of shape
compare every forecast object to every observed object valid at the same time
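A minimal Python sketch of this procedure, assuming precipitation on a 2-D numpy grid; the grid spacing (4 km), the threshold value, and the dilation-based merging of nearby regions are my assumptions, not the program's actual implementation:

import numpy as np
from scipy import ndimage

def identify_objects(precip, threshold=1.0, connect_km=20.0, dx_km=4.0):
    """Label contiguous areas of precip > threshold, then bridge
    regions whose edges lie within connect_km of each other."""
    mask = precip > threshold
    # Dilating every region by connect_km/2 merges regions whose
    # gap is at most connect_km (each side grows by half the radius).
    r = int(round(0.5 * connect_km / dx_km))
    grown = ndimage.binary_dilation(mask, iterations=r)
    labels, n = ndimage.label(grown)
    labels[~mask] = 0    # keep labels only on the original wet points
    return labels, n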

6 Strengths of approach
Attributes related to rainfall intensity and auto-correlation ellipticity were able to classify precipitation systems by morphology (linear/cellular/stratiform)
Can define “similar” and “different” objects using as many attributes as deemed important or interesting
Allows for event-based verification
–categorical
–errors for specific classes of precipitation events

7 Weaknesses of approach
not satisfied with threshold-based object ID procedure
sensitivity to choice of threshold
currently no way to determine extent of “overlap” between objects
does not include temporal evolution, just snapshots

8 Weaknesses, continued…
Not clear how to “match” fcst & obs objects
–how far off does a fcst have to be to be considered a false alarm?
How to weigh different attributes?
–is a 250 km spatial distance the same as a 5 mm precipitation distance?
Do attribute distributions matter?
–a forecast of heavy rain verified against an observation of heavy rain is a good forecast, even if the magnitude is off by 50%
–a 50% error in a light-rain forecast is more significant

9 Example [figure panels: STAGE II, WRF2 CAPS]

10 1 mm threshold [figure panels: STAGE II, WRF2 CAPS]

11 5 mm threshold [figure panels: STAGE II, WRF2 CAPS]

12 analyze objects > (50 km)² [figure panels: WRF2 CAPS, STAGE II]
Object attributes, as shown on the slide:
Area = 12700 km², mean(ppt) = 6.5, σ(ppt) = 14.5, max corr @ 25 km = 0.06, ∂²corr @ 25 km = 0.97, θ @ 25 km = 107°, lat = 39.6°N, lon = 95.1°W
Area = 55200 km², mean(ppt) = 8.2, σ(ppt) = 30.6, max corr @ 25 km = 0.54, ∂²corr @ 25 km = 0.83, θ @ 25 km = 107°, lat = 40.7°N, lon = 97.3°W

13 Example [figure panels: STAGE II, WRF2 CAPS; same object attributes as slide 12]

14 analyze objects > (50 km)² [figure panels: WRF2 CAPS, STAGE II]
Object attributes, as shown on the slide:
Area = 20000 km², mean(ppt) = 22.6, σ(ppt) = 535.5, max corr @ 25 km = 0.47, ∂²corr @ 25 km = 0.82, θ @ 25 km = 149°, lat = 34.3°N, lon = 101.3°W
Area = 60000 km², mean(ppt) = 9.8, σ(ppt) = 57.7, max corr @ 25 km = 0.43, ∂²corr @ 25 km = 0.57, θ @ 25 km = 21°, lat = 40.0°N, lon = 99.6°W

15 Example [figure panels: STAGE II, WRF2 CAPS; same object attributes as slide 14]

16 New auto-correlation attributes
Replaced ellipticity of AC contours with the 2nd derivative of correlation in the vicinity of the max correlation at specific lags (~25, 50, 75 km, every ~10°)
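A sketch of how such attributes might be computed; the FFT-based autocorrelation, the nearest-grid-point lag sampling, and taking the second difference in the angle direction are all my assumptions, since the slide does not specify the implementation:

import numpy as np

def autocorr2d(field):
    """Normalized 2-D autocorrelation via FFT (1 at zero lag)."""
    f = field - field.mean()
    F = np.fft.fft2(f, s=(2 * f.shape[0], 2 * f.shape[1]))  # zero-padded
    ac = np.fft.fftshift(np.fft.ifft2(F * np.conj(F)).real)
    return ac / ac.max()

def corr_at(ac, lag_km, theta_deg, dx_km=4.0):
    """Sample the autocorrelation at a polar lag (nearest grid point)."""
    cy, cx = ac.shape[0] // 2, ac.shape[1] // 2
    r = lag_km / dx_km
    t = np.deg2rad(theta_deg)
    return ac[cy + int(round(r * np.sin(t))), cx + int(round(r * np.cos(t)))]

def shape_attributes(field, lag_km=25.0, dtheta=10.0):
    """Max correlation on the lag ring, its angle, and a centered second
    difference in angle near the max (one reading of the slide's attribute)."""
    ac = autocorr2d(field)
    angles = np.arange(0.0, 180.0, dtheta)  # AC is symmetric: 0-180 suffices
    corrs = np.array([corr_at(ac, lag_km, a) for a in angles])
    i = int(corrs.argmax())
    d2 = (corrs[(i + 1) % len(corrs)] - 2 * corrs[i]
          + corrs[i - 1]) / np.deg2rad(dtheta) ** 2
    return corrs[i], angles[i], d2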

17 Example
Find contiguous regions of reflectivity
Expand areas by 15%
Connect regions within 20 km

18 Example
Analyze objects > 150 points (~3000 km²)
Result: 5 objects
Next, determine attributes for each object

19 Attributes
Each object is characterized by a vector of attributes, with a wide variety of units, ranges of values, etc.
Size: area (number of grid boxes)
Location: lat, lon (degrees)
Intensity: mean, variance of reflectivity in object
Shape: difference between max and min auto-corr at 50, 100, 150 km lags
Orientation: angle of max auto-corr at 50, 100, 150 km lags
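A hypothetical assembly of the size/location/intensity part of this vector for one labeled object (field names and the use of centroid lat/lon are assumptions; the shape and orientation attributes would come from the autocorrelation sketch above):

import numpy as np

def object_attributes(field, labels, k, lat, lon):
    """Attribute dictionary for object k; lat/lon are 2-D coordinate grids."""
    m = labels == k
    return {
        "area": int(m.sum()),            # number of grid boxes
        "lat": float(lat[m].mean()),     # centroid latitude (deg)
        "lon": float(lon[m].mean()),     # centroid longitude (deg)
        "mean": float(field[m].mean()),  # mean intensity in object
        "var": float(field[m].var()),    # intensity variance in object
    }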

20 Object comparison
each object is represented by a vector of attributes: x = (x_1, x_2, …, x_n)^T
similarity/dissimilarity measures quantify the resemblance or distance between two vectors
Euclidean distance: d(x, y) = sqrt( Σ_i (x_i − y_i)² )
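The same distance in code, applied to standardized attribute vectors (raw units would let large-valued attributes dominate; standardization is the subject of slide 34):

import numpy as np

def euclid(x, y):
    """Euclidean distance between two attribute vectors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.sqrt(((x - y) ** 2).sum()))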

21 LEPS
Distance = 1 equates to the difference between the “largest” and “smallest” object for a particular attribute
Linear for uniformly distributed attributes (lat, lon, θ)
Have to be careful with θ (it is periodic)
L1-norm: d = Σ_i |F_i(f) − F_i(o)|, where F_i is the empirical CDF of attribute i
[figure: empirical CDF example — AC diff = 0.4, F = .08, F = .47]
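One plausible reading of the LEPS standardization for a single attribute; building the empirical CDF from a pooled sample of all objects' values is my assumption:

import numpy as np

def leps_distance(xf, xo, sample):
    """|F(xf) - F(xo)|, F the empirical CDF of the attribute, so a
    distance of 1 spans the smallest-to-largest observed values."""
    s = np.sort(np.asarray(sample, float))
    def F(v):
        return np.searchsorted(s, v, side="right") / s.size
    return abs(F(xf) - F(xo))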

22 How to match observed and forecast objects?
d_ij = ‘distance’ between F_i and O_j
…for each forecast object, choose the closest observed object; if d_i* > d_T, then false alarm
…for each observed object, choose the closest forecast object; if d_*j > d_T, then missed event
Objects might “match” more than once…
[figure: forecast objects F_1, F_2 and observed objects O_1, O_2, O_3, with false alarms and missed events marked]
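A sketch of this matching rule, assuming a precomputed forecast-by-observed distance matrix; note that, as the slide says, an object can appear in more than one match:

import numpy as np

def match_objects(D, d_T):
    """D[i, j] = distance between forecast i and observed j.
    Returns matched (i, j) pairs, false-alarm i's, and missed j's."""
    matches, false_alarms, missed = [], [], []
    for i in range(D.shape[0]):          # each forecast object
        j = int(D[i].argmin())           # closest observed object
        if D[i, j] > d_T:
            false_alarms.append(i)
        else:
            matches.append((i, j))
    for j in range(D.shape[1]):          # each observed object
        if D[:, j].min() > d_T:
            missed.append(j)             # no forecast close enough
    return matches, false_alarms, missed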

23 Example of object verification
[figure panels: ARW 2 km (CAPS) forecast with labeled objects Fcst_1, Fcst_2; radar mosaic with labeled objects Obs_1, Obs_2]
Object identification procedure identifies 4 forecast objects and 5 observed objects

24 Distances between objects
Use d_T = 4 as threshold
Match objects, find false alarms, missed events

        O_34   O_37   O_50   O_77   O_79
F_25    5.84   4.16   8.94   9.03  11.53
F_27    6.35   2.54   7.18   6.32   9.25
F_52    7.43   9.11   4.15   9.19   5.45
F_81    9.39   6.35   6.36   2.77   5.24
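Feeding this table into the matching sketch above with d_T = 4: F_27 pairs with O_37 (d = 2.54) and F_81 with O_77 (d = 2.77), while F_25 and F_52 have no observed object within the cutoff (their best distances, 4.16 and 4.15, just exceed it):

import numpy as np

# Rows are F_25, F_27, F_52, F_81; columns are O_34, O_37, O_50, O_77, O_79.
D = np.array([[5.84, 4.16, 8.94, 9.03, 11.53],
              [6.35, 2.54, 7.18, 6.32,  9.25],
              [7.43, 9.11, 4.15, 9.19,  5.45],
              [9.39, 6.35, 6.36, 2.77,  5.24]])

matches, false_alarms, missed = match_objects(D, d_T=4.0)
print(matches, false_alarms, missed)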

25 [figure: median position errors of the matching obs object given a forecast object, for NMM4, ARW2, and ARW4; values shown: .07, .08, .04, .22, .04, −.07]

26 [figure-only slide]

27 Average distances for matching fcst and obs objects
1–30 h fcsts, 10 May – 03 June 2004
Eta (12 km) = 2.12
WRF-CAPS = 1.97
WRF-NCAR = 1.98
WRF-NMM = 2.02

28 With a set of matching obs and fcsts
Nachamkin (2004) compositing ideas
–errors given fcst event
–errors given obs event
Distributions of errors for specific attributes
Use classification to stratify errors by convective mode

29 Automated rainfall object identification
Contiguous regions of measurable rainfall (similar to Ebert and McBride 2000)

30 Connected component labeling
Purely contiguous rainfall areas result in 34 unique “objects” in this example

31 Expand areas by 15%, connect regions that are within ~20 km
Results in 5 objects

32 Useful characterization
Attributes related to rainfall intensity and auto-correlation ellipticity were able to produce groups of stratiform, cellular, and linear rainfall systems in cluster analysis experiments (Baldwin et al. 2005)

33 New auto-correlation attributes
Replaced ellipticity of AC contours with the 2nd derivative of correlation in the vicinity of the max correlation at specific lags (50, 100, 150 km, every 10°)

34 How to measure “distance” between objects
How to weigh different attributes?
–Is a 250 km spatial distance the same as a 5 mm precipitation distance?
Do attribute distributions matter?
–Is 55 mm vs 50 mm the same as 6 mm vs 1 mm?
How to standardize attributes?
–x′ = (x − min)/(max − min)
–x′ = (x − mean)/σ
–Linear error in probability space (LEPS)
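The three candidate standardizations side by side; how `sample` is pooled (e.g. over all objects from one season) is an assumption:

import numpy as np

def minmax(x, sample):
    s = np.asarray(sample, float)
    return (x - s.min()) / (s.max() - s.min())

def zscore(x, sample):
    s = np.asarray(sample, float)
    return (x - s.mean()) / s.std()

def leps(x, sample):
    """Empirical-CDF (LEPS) transform: rank of x within the sample."""
    s = np.sort(np.asarray(sample, float))
    return np.searchsorted(s, x, side="right") / s.size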

35 Estimate of d_T threshold
Compute the distance between each observed object and all others valid at the same time
d_T = 25th percentile = 2.5
Forecasts have similar distributions
[figure: distance distribution with the 25th percentile marked]
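A sketch of that estimate: pairwise distances among the observed objects at one time, then the 25th percentile (the Euclidean form and counting each pair once are assumptions):

import numpy as np

def estimate_dT(obs_vectors, q=25):
    """obs_vectors: (n, k) standardized attribute vectors at one time."""
    X = np.asarray(obs_vectors, float)
    diff = X[:, None, :] - X[None, :, :]
    d = np.sqrt((diff ** 2).sum(-1))
    iu = np.triu_indices(len(X), k=1)   # each pair once, no self-distances
    return float(np.percentile(d[iu], q))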

36 Attributes
[figure panels: NCAR WRF 4 km forecast (objects Fcst_1, Fcst_2); Stage II radar ppt (objects Obs_1, Obs_2)]
Fcst_1: Area = 70000 km², mean(dBZ) = 0.97, σ(dBZ) = 1.26, Δcorr(50) = 1.17, Δcorr(100) = 0.99, Δcorr(150) = 0.84, θ(50) = 173°, θ(100) = 173°, θ(150) = 173°, lat = 40.2°N, lon = 92.5°W
Fcst_2: Area = 70000 km², mean(ppt) = 0.60, σ(ppt) = 0.67, Δcorr(50) = 0.36, Δcorr(100) = 0.52, Δcorr(150) = 0.49, θ(50) = 85°, θ(100) = 75°, θ(150) = 65°, lat = 44.9°N, lon = 84.5°W
Obs_1: Area = 135000 km², mean(ppt) = 0.45, σ(ppt) = 0.57, Δcorr(50) = 0.37, Δcorr(100) = 0.54, Δcorr(150) = 0.58, θ(50) = 171°, θ(100) = 11°, θ(150) = 11°, lat = 39.9°N, lon = 91.2°W
Obs_2: Area = 285000 km², mean(ppt) = 0.32, σ(ppt) = 0.44, Δcorr(50) = 0.27, Δcorr(100) = 0.42, Δcorr(150) = 0.48, θ(50) = 95°, θ(100) = 85°, θ(150) = 85°, lat = 47.3°N, lon = 84.7°W

