
Slide 1: Quality Control of Weather Radar Data. Valliappa.Lakshmanan@noaa.gov, National Severe Storms Laboratory & University of Oklahoma, Norman OK, USA. http://cimms.ou.edu/~lakshman/ (Dec. 13, 2003)

Slide 2: Weather Radar. Weather forecasting relies on observations from remote sensors: models are initialized using observations, and severe weather warnings rely on real-time observations. Weather radars provide the highest resolution; in time, a complete 3D scan every 5-15 minutes; in space, 0.5-1 degree x 0.25-1 km tilts; vertically, elevation angles spaced 0.5 to 2 degrees apart.

Slide 3: NEXRAD (WSR-88D). Weather radars in the United States are 10 cm Doppler radars that measure both reflectivity and velocity; spectrum width information is also provided. They suffer very little attenuation with range and can "see" through thunderstorms. Horizontal resolution: 0.95 degrees (365 radials), 1 km for reflectivity and 0.25 km for velocity. Horizontal range: a 460 km surveillance (reflectivity-only) scan, and 230 km for scans at higher tilts and for velocity at the lowest tilt.

Slide 4: NEXRAD volume coverage pattern. The radar sweeps a tilt, then moves up and sweeps another tilt. It typically collects all the moments at once, except at the lowest scan. The 3 dB beam width is about 1 degree.

Slide 5: Beam path. The path of the radar beam is slightly refracted and curves away from the earth; in a standard atmosphere it follows the 4/3 effective-earth-radius model. Under anomalous propagation, non-standard atmospheric conditions refract the beam heavily downward so that it senses the ground, producing ground clutter.
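A minimal sketch of the standard 4/3 effective-earth-radius beam-height calculation the slide refers to (not code from the presentation); the function name and the example numbers are illustrative only.

```python
import numpy as np

def beam_height_km(range_km, elev_deg, radar_height_km=0.0, k_e=4.0 / 3.0):
    """Height of the beam centre above the radar under the standard
    4/3 effective-earth-radius refraction model."""
    a = 6371.0 * k_e                      # effective earth radius, km
    theta = np.deg2rad(elev_deg)
    r = np.asarray(range_km, dtype=float)
    return np.sqrt(r**2 + a**2 + 2.0 * r * a * np.sin(theta)) - a + radar_height_km

# The 0.5-degree tilt is roughly 5 km above the radar at 230 km range,
# which is why distant shallow precipitation can be overshot entirely.
print(beam_height_km(230.0, 0.5))
```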

Slide 6: Anomalous Propagation. Buildings near the radar return reflectivity values typical of hail; automated algorithms are severely affected.

Slide 7: AP + biological. North of the radar is some ground clutter; the light green echo probably corresponds to migrating birds. The sky is actually clear.

Slide 8: AP + precipitation. AP north of the radar, a line of thunderstorms to the east of the radar, and some clear-air return around the radar.

Slide 9: Small cells embedded in rain. The strong echoes here are really precipitation. Notice the smooth green area.

Slide 10: Not rain. This green area is not rain, however; it is probably biological.

Slide 11: Clear-air return. Clear-air return near the radar, mostly insects and debris after the thunderstorm passed through.

Slide 12: Chaff. The high-reflectivity lines are not storms; they are metallic strips released by the military.

Slide 13: Terrain. The high-reflectivity region is actually due to ice on the mountains; the beam has been refracted downward.

Slide 14: Radar Data Quality. Radar data is high resolution and very useful. However, it is subject to many contaminants. Human users can usually tell good data from bad; automated algorithms find it difficult to do so.

Slide 15: Motivation. Why improve radar data quality? McGrath et al. (2002) showed that the mesocyclone detection algorithm (Stumpf et al., Weather and Forecasting, 1999) produces the majority of its false detections in clear air. The presence of AP degrades the performance of a storm identification and motion estimation algorithm (Lakshmanan et al., J. Atmos. Research, 2003).

Slide 16: Quality Control of Radar Data. An extensively studied problem. Simplistic approaches: thresholding the data (low = bad) fails because high values are bad in AP, terrain, and chaff, while low values are good in mesocyclones, hurricane eyes, etc. Vertical tilt tests work for AP but fail farther from the radar and in shallow precipitation.
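A sketch, assuming NumPy arrays of dBZ on the lowest two tilts, of the two simplistic approaches this slide criticizes; the thresholds (20 dBZ, 30 dB tilt-to-tilt drop) are hypothetical, not values from the talk.

```python
import numpy as np

def simplistic_qc(refl0, refl1, low_thresh=20.0, tilt_drop=30.0):
    """Two simplistic QC rules: (1) flag weak echo as bad ("low = bad"),
    which wrongly keeps high-dBZ AP/terrain/chaff and throws away weak
    but real echo such as hurricane eyes; (2) a vertical tilt test that
    flags echo which vanishes or drops sharply on the next tilt up.
    refl0/refl1: reflectivity on the lowest two tilts, NaN where no echo."""
    weak = np.nan_to_num(refl0, nan=-999.0) < low_thresh
    drop = np.nan_to_num(refl0, nan=-999.0) - np.nan_to_num(refl1, nan=0.0)
    shallow = drop > tilt_drop          # strong at the surface, nothing aloft
    return weak | shallow               # True = pixel flagged as suspect
```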

Slide 17: Image processing techniques. Typically based on median filtering the reflectivity data: this removes clear-air return but fails for AP and for spatially smooth clear-air return, and it smooths the data. Insufficiently tested techniques include fractal techniques and neural network approaches.
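A minimal illustration of the median-filtering approach the slide describes, using SciPy's generic median filter; the fill value and 5 dBZ cutoff are assumptions for the sketch, not parameters from the presentation.

```python
import numpy as np
from scipy.ndimage import median_filter

def median_filter_qc(refl, size=3, min_dbz=5.0):
    """Speckle removal by median filtering the reflectivity field.
    Isolated clear-air/noise pixels are suppressed, but coherent AP
    regions survive and real echo gets smoothed, which are exactly
    the failure modes listed on the slide."""
    filled = np.where(np.isfinite(refl), refl, -32.0)   # fill missing gates
    smoothed = median_filter(filled, size=size)
    # drop pixels whose neighborhood median is below the cutoff
    return np.where(smoothed >= min_dbz, refl, np.nan)
```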

Slide 18: Steiner and Smith (Journal of Applied Meteorology, 2002). A simple rule base that introduced more sophisticated measures. Echo top: the highest tilt that has at least 5 dBZ; works mostly, but fails in heavy AP and shallow precipitation. Inflections: a measure of variability within a local neighborhood of a pixel, a texture measure suited to scalar data. Their hard thresholds are not reliable.

Slide 19: Radar Echo Classifier. Operationally implemented on US radar product generators; a fuzzy logic technique (Kessinger, AMS 2002) that uses all three moments of radar data. Insight: targets that are not moving have zero velocity and low spectrum width. High reflectivity values are usually good, but those that are not moving are probably AP. It also makes use of the Steiner-Smith measures, though not the vertical (echo-top) features, in order to retain tilt-by-tilt ability. Good for human users, but not for automated use.

Slide 20: Radar Echo Classifier. Finds both the good data and the AP, but cannot be used to reliably discriminate the two on a pixel-by-pixel basis.

Slide 21: Quality Control Neural Network. Compute texture features on the three moments and vertical features on the latest ("virtual") volume, so that tilts can be cleaned up as they arrive while still utilizing vertical features. Train the neural network off-line on these features to classify pixels as precipitation or non-precipitation at every scan of the radar, and use the classification results to clean up the data field in real time.

Slide 22: The set of input features. Computed in a 5x5 polar neighborhood around each pixel. For velocity and spectrum width: mean, variance, and value minus mean (Kessinger).

Slide 23: Reflectivity Features. Computed on the lowest two tilts of reflectivity: mean, variance, value minus mean, square difference of pixel values (Kessinger), homogeneity, radial inflections (Steiner-Smith), and echo size found through region growing.
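A sketch of how the 5x5 neighborhood texture features of slides 22-23 could be computed with NumPy/SciPy; azimuthal wrap-around, the exact homogeneity definition, and the region-growing echo size are omitted, and the function names are mine, not the presentation's.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def texture_features(field, size=5):
    """Local mean, variance and value-minus-mean in a size x size
    neighborhood of a 2D polar field (azimuth x range) of one moment."""
    filled = np.nan_to_num(field, nan=0.0)
    local_mean = uniform_filter(filled, size=size)
    local_sqmean = uniform_filter(filled**2, size=size)
    local_var = np.maximum(local_sqmean - local_mean**2, 0.0)
    return {"mean": local_mean,
            "variance": local_var,
            "value_minus_mean": filled - local_mean}

def radial_inflections(field, size=5):
    """Rough Steiner-Smith-style inflection count: sign changes of the
    range-direction gradient, totalled over the local window (sketch)."""
    grad_sign = np.sign(np.diff(np.nan_to_num(field, nan=0.0), axis=1))
    changes = (np.abs(np.diff(grad_sign, axis=1)) > 0).astype(float)
    changes = np.pad(changes, ((0, 0), (1, 1)))        # restore original width
    return uniform_filter(changes, size=size) * size * size
```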

Slide 24: Vertical Features. From the vertical profile of reflectivity: the maximum value across tilts, a weighted average with the tilt angle as the weight, the difference between data values at the two lowest scans (Fulton), and the echo top height at a 5 dBZ threshold (Steiner-Smith). These are computed on a "virtual volume".
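A sketch of the vertical features listed on this slide, assuming the virtual volume is a (tilts x azimuths x ranges) array with NaN for missing gates; for brevity the echo top is returned as the elevation angle of the highest tilt exceeding 5 dBZ rather than a physical height (which would use the beam-height formula from slide 5).

```python
import numpy as np

def vertical_features(volume_dbz, elev_deg, echo_top_thresh=5.0):
    """Vertical-profile features on a 'virtual volume' of reflectivity."""
    filled = np.nan_to_num(volume_dbz, nan=-32.0)
    max_across_tilts = filled.max(axis=0)

    # weighted average with the tilt (elevation) angle as the weight
    w = np.asarray(elev_deg, dtype=float)[:, None, None]
    tilt_weighted_mean = (filled * w).sum(axis=0) / w.sum()

    # difference between the data values at the two lowest scans (Fulton)
    low_scan_diff = filled[0] - filled[1]

    # highest tilt with at least 5 dBZ (Steiner-Smith echo top)
    above = filled >= echo_top_thresh
    top_index = np.where(above.any(axis=0),
                         above.shape[0] - 1 - np.argmax(above[::-1], axis=0),
                         -1)
    echo_top_angle = np.where(top_index >= 0,
                              np.asarray(elev_deg)[np.clip(top_index, 0, None)],
                              np.nan)
    return max_across_tilts, tilt_weighted_mean, low_scan_diff, echo_top_angle
```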

Slide 25: Training the Network. How many patterns? Cornelius et al. (1995) used a neural network to do radar quality control, but the resulting classifier was not useful and was discarded in favor of the fuzzy-logic Radar Echo Classifier. They used fewer than 500 user-selected pixels to train the network, which does not capture the diversity of the data and gives a skewed distribution.

Slide 26: Diversity of data? Need data cases that cover shallow precipitation, ice in the atmosphere, AP and ground clutter (high data values that are bad), clear-air return, and mesocyclones (low data values that are good).

Slide 27: Distribution of data. Not a climatological distribution: on most days there is no weather, so low (non-precipitating) reflectivities predominate, but we need good performance in weather situations. Need to avoid bias in selecting pixels; choose all pixels in a storm echo, for example, not just the storm core. Neural networks perform best when trained with equally likely classes, so at any value of reflectivity both classes should be equally likely, and data cases must be found to meet this criterion. This is another reason why previous neural network attempts failed.
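A sketch of the class-balancing criterion described here: within each reflectivity bin, subsample so precipitation and non-precipitation patterns are equally likely. The bin edges and the subsampling-by-discarding strategy are illustrative assumptions, not the procedure actually used to assemble the training set.

```python
import numpy as np

def balance_by_reflectivity(features, labels, refl, n_bins=14, seed=None):
    """Keep equal numbers of precip (label 1) and non-precip (label 0)
    patterns in each reflectivity bin."""
    rng = np.random.default_rng(seed)
    edges = np.linspace(-10.0, 60.0, n_bins + 1)        # illustrative bins (dBZ)
    keep = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = np.flatnonzero((refl >= lo) & (refl < hi))
        good = in_bin[labels[in_bin] == 1]
        bad = in_bin[labels[in_bin] == 0]
        n = min(len(good), len(bad))
        if n == 0:
            continue
        keep.append(rng.choice(good, n, replace=False))
        keep.append(rng.choice(bad, n, replace=False))
    keep = np.concatenate(keep) if keep else np.array([], dtype=int)
    return features[keep], labels[keep]
```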

Slide 28: Distribution of training data by reflectivity values (figure).

Slide 29: Training the network. Human experts classified the training data by marking bad echoes; they had access to the time sequence and knowledge of the event. The training data consisted of 8 different volume scans that captured the diversity of the data, about 1 million patterns.

Slide 30: The Neural Network. A fully feed-forward neural network trained using resilient propagation with weight decay. The error measure was a modified cross-entropy, modified to weight different patterns differently. A separate validation set of 3 volume scans was used to choose the number of hidden nodes and to stop the training.

Slide 31: Emphasis. The patterns are weighted differently because not all patterns are equally useful: given a choice, we would like to make our mistakes on low reflectivities; we do not have enough "contrary" examples; texture features are inconsistent near the boundaries of storms; and vertical features are unusable at far ranges. The weighting does not change the overall distribution to a large extent.
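A minimal sketch of a pattern-weighted cross-entropy in the spirit of slides 30-31; the talk's exact modification and emphasis scheme are not given, so the weighting constants below are hypothetical stand-ins for the ideas listed (down-weight low reflectivities, storm edges, and far ranges).

```python
import numpy as np

def weighted_cross_entropy(y_true, y_pred, weights, eps=1e-7):
    """Cross-entropy in which every training pattern carries its own weight.
    y_true in {0, 1}, y_pred in (0, 1), weights >= 0."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    ce = -(y_true * np.log(y_pred) + (1.0 - y_true) * np.log(1.0 - y_pred))
    return np.sum(weights * ce) / np.sum(weights)

def emphasis_weights(refl, range_km, near_storm_edge):
    """Illustrative emphasis: make mistakes cheaper at low reflectivities,
    near storm edges (texture unreliable) and at far ranges (no vertical
    features). All constants are assumptions, not values from the talk."""
    w = np.ones_like(refl, dtype=float)
    w *= np.where(refl < 20.0, 0.5, 1.0)
    w *= np.where(near_storm_edge, 0.5, 1.0)
    w *= np.where(range_km > 230.0, 0.5, 1.0)
    return w
```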

Slide 32: Histograms of different features. The best discriminants: homogeneity, height of the maximum, inflections, and variance of spectrum width.

Slide 33: Generalization. There is no way to guarantee generalization, but overfitting was avoided in several ways: using the validation set (not the training set) to decide the number of hidden nodes and when to stop training, weight decay, limited network complexity, 500,000 patterns, and emphasizing certain patterns.

Slide 34: Untrainable data case. None of the features we have can discriminate this clear-air return from good precipitation, so the migratory birds were essentially removed from the training set.

Slide 35: Velocity. We do not always have velocity data. In the US weather radars, reflectivity data are available to 460 km while velocity data are available only to 230 km (but at higher resolution), and velocity data can be range-folded (a function of the Nyquist frequency). Therefore, two different networks are trained: one with velocity (and spectrum width) data, the other without.

Slide 36: Choosing the network. Training the with-velocity and without-velocity networks: shown is the validation error as training progresses for different numbers of hidden nodes. We choose 5 hidden nodes for the with-velocity network (210th epoch) and 4 for the without-velocity network (310th epoch).

Slide 37: Behavior of training error. The training error keeps decreasing, but the validation error starts to increase after a while; we assume the point where this happens is where the network starts to overfit.
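A generic early-stopping loop illustrating the criterion described on slides 36-37; the callable names, maximum epoch count and patience value are assumptions for the sketch, not details from the talk.

```python
import numpy as np

def train_with_early_stopping(train_step, validation_error,
                              max_epochs=1000, patience=25):
    """Keep training while the validation error keeps improving; once it has
    not improved for `patience` epochs, assume overfitting has started and
    report the best epoch. `train_step()` runs one epoch of training and
    `validation_error()` returns a scalar error on the validation volumes."""
    best_err, best_epoch = np.inf, 0
    for epoch in range(max_epochs):
        train_step()
        err = validation_error()
        if err < best_err:
            best_err, best_epoch = err, epoch
        elif epoch - best_epoch >= patience:
            break
    return best_epoch, best_err
```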

Slide 38: Performance measure. Use a testing data set that is completely independent of the training and validation data sets, compared against classification by human experts.

Slide 39: Receiver Operating Characteristic. A perfect classifier would be flush against the top and the left of the ROC plot. If you need to retain 90% of the good data, you have to live with 20% of the bad data when using the QCNN; the existing NWS technique forces you to live with 55% of the bad data.
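A small sketch of how such an ROC curve can be traced by sweeping a threshold on the network output; the inputs are assumed to be classifier scores for pixels labelled good and bad by the human experts.

```python
import numpy as np

def roc_curve(scores_good, scores_bad, n_thresholds=101):
    """Fraction of good data retained (y) vs. fraction of bad data
    retained (x) as the decision threshold is varied."""
    thresholds = np.linspace(0.0, 1.0, n_thresholds)
    good_kept = np.array([(scores_good >= t).mean() for t in thresholds])
    bad_kept = np.array([(scores_bad >= t).mean() for t in thresholds])
    return bad_kept, good_kept

# Usage: pick the largest threshold that still keeps >= 90% of the good
# data, then read off how much bad data comes along with it.
```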

Slide 40: Performance (AP test case).

Slide 41: Performance (strong convection).

Slide 42: Test case (ground clutter).

Slide 43: Test case (small cells).

Slide 44: Summary. A radar-only quality control algorithm that uses texture features derived from the three radar moments. It removes bad data pixels corresponding to AP, ground clutter, and clear-air returns, but does not reliably remove biological targets such as migrating birds. It works in all sorts of precipitation regimes and does not remove good data except toward the edges of storms.

Slide 45: Multi-sensor Aspect. There are other sensors observing the same weather phenomena: if there are no clouds in the satellite imagery, then it is likely that there is no precipitation either. The visible channel of the satellite cannot be used at night, however.

Slide 46: Surface Temperature. Use the infrared channel of weather satellite images; a radiance-to-temperature relationship exists. If the ground is being sensed, the temperature will be the ground temperature; if the satellite "cloud-top" temperature is less than the surface temperature, cloud cover exists.
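The cloud-cover test reduces to a temperature comparison; a minimal sketch follows, using the 5 K conservative threshold quoted later on slide 56 (the function name and array conventions are mine).

```python
import numpy as np

def cloud_cover_mask(ir_temp_K, surface_temp_K, threshold_K=5.0):
    """True where the satellite IR ('cloud-top') temperature is colder than
    the surface temperature by more than the threshold, i.e. where cloud
    cover exists; both inputs are 2D fields on the same grid."""
    return (surface_temp_K - ir_temp_K) > threshold_K
```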

Slide 47: Spatial and Temporal Considerations. The sensors differ in spatial and temporal resolution: radar tilts arrive every 20-30 s at high spatial resolution (1 km x 1 degree); satellite data arrive every 30 min at 4 km resolution; surface temperature is 2 hours old at 20 km resolution. Fast-moving storms and small cells can therefore pose problems.

Slide 48: Spatial. For reasonably sized complexes, both the satellite infrared temperature and the surface temperature are smooth fields, so bilinear interpolation is effective.
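A plain-NumPy sketch of bilinear interpolation of such a smooth coarse field onto radar pixel locations; it assumes evenly spaced, ascending coarse-grid coordinates and clips to the grid edges, which is an implementation choice of this sketch rather than the presentation's.

```python
import numpy as np

def bilinear_to_radar_grid(coarse, coarse_y, coarse_x, radar_y, radar_x):
    """Bilinearly interpolate a coarse 2D field (satellite IR or surface
    temperature) at the radar pixel coordinates radar_y/radar_x (arrays of
    identical shape, in the same units as coarse_y/coarse_x)."""
    dy = coarse_y[1] - coarse_y[0]
    dx = coarse_x[1] - coarse_x[0]
    fy = np.clip((radar_y - coarse_y[0]) / dy, 0, len(coarse_y) - 1.001)
    fx = np.clip((radar_x - coarse_x[0]) / dx, 0, len(coarse_x) - 1.001)
    iy, ix = fy.astype(int), fx.astype(int)
    wy, wx = fy - iy, fx - ix
    return ((1 - wy) * (1 - wx) * coarse[iy, ix]
            + (1 - wy) * wx * coarse[iy, ix + 1]
            + wy * (1 - wx) * coarse[iy + 1, ix]
            + wy * wx * coarse[iy + 1, ix + 1])
```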

Slide 49: Temporal. Estimate motion using the high-resolution radar data and advect the cloud-top temperature based on the movement seen by radar; advection has high skill under 30 minutes. Assume the surface temperature does not change, since a 1-2 hr model forecast has no skill above a persistence forecast.
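A deliberately simplified advection sketch: it shifts the whole satellite field by one radar-derived motion vector, whereas the system described on slide 53 uses spatially varying motion from K-Means texture segmentation and tracking. Units and function names are assumptions.

```python
import numpy as np

def advect(field, u_px_per_min, v_px_per_min, minutes):
    """Advect a smooth satellite field forward in time by a uniform shift.
    u/v are motion components in pixels per minute along the columns (x)
    and rows (y) of the field; `minutes` is the forecast lead time."""
    dx = int(round(u_px_per_min * minutes))
    dy = int(round(v_px_per_min * minutes))
    return np.roll(np.roll(field, dy, axis=0), dx, axis=1)
```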

Slide 50: Cloud-cover, Step 1. The satellite infrared temperature field; blue is colder, typically higher storms. Visible are a thin line of fast-moving storms and a large thunderstorm complex.

Slide 51: Cloud-cover, Step 4. Forecast to move east and decrease in intensity; this forecast is made based on radar data.

Slide 52: Cloud-cover, Step 2. Combined data from 4 different radars. These are two "views" of the same phenomenon: the different sensors measure different things and have different constraints.

Slide 53: Cloud-cover, Step 3. Estimates of motion and growth-and-decay are made using K-Means texture segmentation and tracking. Red indicates eastward motion.

Slide 54: Cloud-cover, Step 4. The forecast is for 43 minutes, the time difference between the satellite image and the radar tilt.

Slide 55: Cloud-cover, Step 5. The surface temperature field: 20 km x 20 km spatial resolution, 2 hours old, interpolated from weather stations around the country. It is the best we have.

Slide 56: Cloud-cover, Step 6. The difference field: white indicates a temperature difference of more than 20 K. A 5 K threshold is very conservative.

Slide 57: Distribution of cloud-cover. Two precipitation cases (May 8, 2003 and July 30, 2003) indicate cloud-cover temperature differences of more than 15 K at minimum.

Slide 58: Multi-sensor QC, Step 1. Original data from July 11, 2003 (KTLX), with a large amount of contamination: clear-air return, probably biological.

Slide 59: Multi-sensor QC, Step 2. The result of applying the radar-only neural network: most of the clear-air contamination is gone, with possible precipitation northwest of the radar.

Slide 60: Multi-sensor QC, Step 3. The cloud-cover field: some cloud cover northwest of the radar, nothing to the south of the radar. The 5 K threshold corresponds to the light blues.

Slide 61: Multi-sensor QC, Step 4. The result of applying the cloud-cover field to the NN output: small cells are retained, but biological contamination is removed.
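One plausible way to express the final combination of slides 58-61 (the exact combination rule and operating point are not given in the transcript, so the masking logic and 0.5 threshold below are assumptions).

```python
import numpy as np

def multisensor_qc(refl, nn_precip_prob, cloud_mask, prob_thresh=0.5):
    """Keep a reflectivity pixel only if the radar-only neural network calls
    it precipitation AND the satellite/surface-temperature difference
    indicates cloud cover at that location; everything else is blanked."""
    keep = (nn_precip_prob >= prob_thresh) & cloud_mask
    return np.where(keep, refl, np.nan)
```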

Slide 62: Conclusion. The radar-only neural network outperforms the currently operational quality-control technique, and it can be improved even further using data from other sensors; this needs more systematic examination.

