CI VERIFICATION METHODOLOGY & PRELIMINARY RESULTS


1 CI VERIFICATION METHODOLOGY & PRELIMINARY RESULTS lakshman@ou.edu

2 In short:
1. Find observed CI using radar echoes aloft
2. Compare to CI forecasts from UAH and UW
3. Find hits, misses, and false alarms
4. Preliminary results
5. Discussion

3 1. How observed CI was determined (from radar data aloft)

4 Observed CI
- For verification purposes, we need a "truth" field
  - Independent of the way in which CI is detected
  - Not tied to "objects"
- Based on multi-radar reflectivity at the -10C isotherm
  - Reflectivity aloft is associated with graupel formation
  - A good indication of convection
  - Less contaminated by clutter and biological echoes
- The multi-radar reflectivity is QC'ed, but the QC is not perfect

5 Reflectivity at -10C on 4/4/2011 (approx. 1 km resolution over CONUS)

6 Classifying CI
- Define convection as: reflectivity at -10C exceeds 35 dBZ
- New convection: was below 35 dBZ in the previous image
  - Images are 5 minutes apart
- Done on a pixel-by-pixel basis (see the sketch below)
  - But allow for growth of ongoing convection
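A minimal sketch of this pixel-wise test, in Python with NumPy; the function and array names, and the toy values, are illustrative rather than the operational code:

```python
# Minimal sketch of the pixel-wise CI test described above; array names
# and the demo values are illustrative, not the operational code.
import numpy as np

CONVECTION_DBZ = 35.0  # reflectivity threshold at the -10C isotherm

def classify_new_convection(refl_t0, refl_t1):
    """Flag pixels that cross 35 dBZ between two images 5 minutes apart."""
    convective_now = refl_t1 >= CONVECTION_DBZ
    convective_before = refl_t0 >= CONVECTION_DBZ
    # New convection: above threshold now, below threshold 5 minutes ago.
    # (The full method also aligns the images and allows ongoing cells to
    # grow; see the warping and neighborhood-search slides below.)
    return convective_now & ~convective_before

# Toy 1x3 example: only the middle pixel is newly convective.
print(classify_new_convection(np.array([20.0, 30.0, 40.0]),
                              np.array([25.0, 45.0, 50.0])))
# -> [False  True  False]
```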

7 Model verification
- The CI detection algorithm is now running in real time
- It is being used to verify NSSL-WRF model forecasts of CI

8 Aside: model verification
- The probability of CI in one hour is very similar
- But the time evolution is different

9 Real time: Image at t0

10 Real time: Image at t1

11 Real time: Observed CI

12 Methodology
- Take the image at t0 and warp it to align it with the image at t1
  - Warping is limited to a 5-pixel movement
  - Determined by cross-correlation with a smoothness constraint imposed on it
  - 5 pixels in 5 min = 60 km/h maximum movement
- Then do a neighborhood search (sketched below)
  - A pixel above 35 dBZ with no pixel above 35 dBZ within 3 km in the aligned image is "New Convection"
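A rough sketch of the neighborhood-search step, assuming the t0 image has already been warped into alignment (the cross-correlation warping itself is omitted) and taking the grid as roughly 1 km per pixel, so 3 km is about 3 pixels; the square footprint approximating the radius is an assumption:

```python
# Sketch of the neighborhood-search step, assuming the t0 image has
# already been warped into alignment with t1. The ~1 km/pixel grid and
# the square 7x7 footprint approximating a 3 km radius are assumptions.
from scipy.ndimage import maximum_filter

CONVECTION_DBZ = 35.0
ISOLATION_PX = 3  # ~3 km at ~1 km per pixel

def new_convection(aligned_t0, refl_t1, isolation_px=ISOLATION_PX):
    # Maximum reflectivity within the isolation radius of the aligned image.
    local_max_t0 = maximum_filter(aligned_t0, size=2 * isolation_px + 1)
    # A t1 pixel is "New Convection" if it exceeds 35 dBZ and no pixel
    # within ~3 km of it in the aligned t0 image did.
    return (refl_t1 >= CONVECTION_DBZ) & (local_max_t0 < CONVECTION_DBZ)
```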

13 Example: Image at t0

14 Example: Image at t1

15 Example: Image at t0 aligned to t1

16 Classification

17 Definition of Observed CI
- Computed CI using 4 different distance thresholds:
  - 3 km (as described)
  - 5 km
  - 15 km
  - 25 km
- The 15 km threshold means that a new CI pixel has to be at least 15 km from existing convection to be considered new
  - In the HWT, this is what forecasters tended to like
  - This is what I will use for scoring

18 Significant cells?
- One possible problem is that even a single pixel counts as CI
- So we also tried requiring cells of at least 13 km^2
  - This will be called ObservedCIv2
  - It tends to find only significant cells (or cells after they have grown a little)
- We started doing this after some feedback on this point
  - Not available for all days
  - We can go back and recompute, but it doesn't seem to make much difference to the final scores

19 2. Comparing Observed to Forecast (by finding the distance between centroids)

20 Computing distance
- Take the ObservedCI, SatCast, and UWCI grid points
- Find contiguous pixels and call them an object
- Find the centroid of each object (see the sketch below)
- Use storm motion derived from radar echoes and the model 500 mb wind field
- Compute the distance between each ObservedCI centroid and each forecast CI centroid
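The object-and-centroid step could look like the following sketch, which uses SciPy's connected-component labelling on any of the three grids; the function name is hypothetical:

```python
# Illustrative sketch of turning a CI grid into object centroids with
# SciPy's connected-component labelling; the function name is hypothetical.
from scipy.ndimage import label, center_of_mass

def ci_centroids(ci_mask):
    """Group contiguous CI pixels into objects; return (row, col) centroids."""
    labels, n_objects = label(ci_mask)  # contiguous pixels become one object
    return center_of_mass(ci_mask, labels, list(range(1, n_objects + 1)))
```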

21 Distance computation
- Distance is computed as follows:
  - If the observed CI is outside the time window of the forecast CI (-15 to +45 min), then dist = MAXDIST
  - Project the forecast CI to the time of the observed CI, using the storm motion field
  - Compute the Euclidean distance in lat-lon degrees
- MAXDIST was set to 100 km
  - Pretty generous
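A hedged sketch of this distance rule; the `CIObject` container, the degrees-per-minute storm-motion representation, and the ~111 km per degree conversion are assumptions made for illustration:

```python
# Sketch of the pairwise distance rule; the CIObject container, the
# motion representation, and the degree-to-km conversion are assumptions.
from dataclasses import dataclass
import numpy as np

MAXDIST_KM = 100.0

@dataclass
class CIObject:
    time_min: float          # valid time, in minutes
    lat: float
    lon: float
    motion_lat: float = 0.0  # storm motion, degrees per minute
    motion_lon: float = 0.0

def ci_distance(obs: CIObject, fcst: CIObject) -> float:
    dt = obs.time_min - fcst.time_min
    if dt < -15 or dt > 45:              # outside the -15..+45 min window
        return MAXDIST_KM
    # Project the forecast CI to the observation time using storm motion.
    lat = fcst.lat + fcst.motion_lat * dt
    lon = fcst.lon + fcst.motion_lon * dt
    # Euclidean distance in lat-lon degrees; ~111 km per degree is an
    # assumption used here to compare against the 100 km MAXDIST.
    return min(np.hypot(obs.lat - lat, obs.lon - lon) * 111.0, MAXDIST_KM)
```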

22 3. Scoring (two ways: Hungarian match and neighborhood match)

23 Scoring: Hungarian Match
- Create a cost matrix of the distance between each pair (observed CI to forecast CI)
- Find the best association for each centroid to minimize the global sum of distances
- Any associated pair is a hit
- Any unassociated observed CI is a miss
- Any unassociated forecast CI is a false alarm
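This is the classic assignment problem, which SciPy's `linear_sum_assignment` solves; treating pairs assigned at the MAXDIST ceiling as unmatched is an assumption consistent with the distance rule above:

```python
# Sketch of the Hungarian-match scoring with SciPy's assignment solver.
# cost[i, j] holds the distance between observed CI i and forecast CI j,
# with MAXDIST already substituted for out-of-window pairs.
from scipy.optimize import linear_sum_assignment

MAXDIST_KM = 100.0

def hungarian_score(cost):
    n_obs, n_fcst = cost.shape
    rows, cols = linear_sum_assignment(cost)  # minimizes global sum of distances
    valid = cost[rows, cols] < MAXDIST_KM     # drop pairs matched "at the ceiling"
    hits = int(valid.sum())
    misses = n_obs - hits          # observed CI left without a partner
    false_alarms = n_fcst - hits   # forecast CI left without a partner
    return hits, misses, false_alarms
```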

24 Scoring: Neighborhood Match
- Consider each observed CI:
  - If there is any forecast CI within MAXDIST, it is a hit
  - Otherwise, it is a miss
- Consider each forecast CI:
  - If there is no observed CI within MAXDIST, it is a false alarm
- More generous than the Hungarian match, since multiple forecasts can be verified by a single observation
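A short sketch of the neighborhood scoring, assuming a pairwise distance function like the `ci_distance` sketch above:

```python
# Sketch of the more generous neighborhood match: a single forecast can
# verify several observations. `dist` is any pairwise distance function
# following the rule above (e.g. the ci_distance sketch).
def neighborhood_score(obs_list, fcst_list, dist, maxdist_km=100.0):
    hits = sum(1 for o in obs_list
               if any(dist(o, f) < maxdist_km for f in fcst_list))
    misses = len(obs_list) - hits  # observed CI with no forecast nearby
    false_alarms = sum(1 for f in fcst_list
                       if not any(dist(o, f) < maxdist_km for o in obs_list))
    return hits, misses, false_alarms
```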

25 Summary of numbers that matter
- Observed CI:
  - 35 dBZ threshold
  - 5 pixel warp in 5 minutes
  - 15 pixel (~15 km) isolation for new CI
- Significant-cell area threshold (ObservedCIv2): 13 km^2
- Time window: -15 min to +45 min
- Distance threshold: hits have to be within 100 km

26 4. Preliminary results (real-time images and daily scores)

27 Real time
- The ObservedCI, ObservedCIv2, UAH, and UWCI algorithms can be seen in real time at:
  http://wdssii.nssl.noaa.gov/web/wdss2/products/radar/civer.shtml

28 Example

29 Verification dataset
- A dataset of centroids over the Spring Experiment is available at:
  ftp://ftp.nssl.noaa.gov/users/lakshman/civerification.tgz
- Contains:
  - All ObservedCI, SatCast, and UWCI centroids
  - ObservedCIv2 from when we started creating them
  - Results of matching and skill scores by day

30 Example result for June 10, 2011: UAH and UWCI. These scores are typical.

31 Only significant cells (ObservedCIv2): UAH and UWCI

32 5. Discussion

33 Possible reason for low values
- The cirrus mask could be a factor
- Computing scores without taking the mask into account is problematic
- Because the mask is so widespread, most radar-based CI happens under the mask

