AUTOMATED TROPICAL CYCLONE EYE DETECTION USING DISCRIMINANT ANALYSIS


1 AUTOMATED TROPICAL CYCLONE EYE DETECTION USING DISCRIMINANT ANALYSIS
Robert DeMaria

2 Overview
Background, Motivation, Current Method, Objective, Proposed Method, Data, Algorithm, Results, Conclusions, Future Work

3 Schematic of a Hurricane Eye
1. Warm ocean water evaporates near the ring of very strong winds near the center.
2. This causes a ring of thunderstorms near the center, called the eye wall. Air moves upward in the eye wall (orange arrows).
3. Once the storm is organized, air starts to sink in the middle, causing the clouds there to evaporate and the eye to form.
From the “Hurricane Science and Society” web page

4 Infrared Image of Hurricane Joaquin October 3, 2015
Note: The coldest areas appear as bright colors, showing the eye wall. The clear area near the storm center is the eye.

5 Why Do We Care About Detecting Hurricane Eyes?
Initial eye formation is a sign that a tropical cyclone is becoming more organized. It is often a signal that the storm is about to get much stronger. Eye detection is an important step in estimating the current strength of a hurricane.

6 Current Methods
Hurricane hunter aircraft: low availability.
Dvorak Technique: used worldwide; produces TC intensity estimates from satellite imagery; performed every 6 hours. Determining whether a storm has an eye is an important step in the Dvorak technique.
(sos.noaa.gov/Education/tracking.html)

7 Dvorak Technique for Intensity Estimation
Uses both human judgment (with strict criteria) and automated tools. The forecaster examines visible/IR/microwave imagery and subjectively classifies the satellite image as one of 4 basic patterns: curved band, shear pattern, central dense overcast (CDO), or eye. Properties of the pattern determine intensity, e.g. thickness of the eye wall, size of the curved band, size of the CDO.

8 Dvorak cont. The eye pattern is selected if there is a region near the rotational center devoid of clouds. Manual processes are time consuming, so only a small fraction of the satellite data is used.

9 Objective Replicate human-performed eye detection with an automated procedure. Utilize the same input routinely available to forecasters. Make eye detection available at more times to supplement human-performed eye detection.

10 Proposed Method Use geostationary IR data and best track data.
Use simple statistical computer vision/machine learning techniques: Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Quadratic Discriminant Analysis (QDA). Use an archive of classified images (from Dvorak) as training/truth.

11 Infrared Data from Geostationary Satellites
2D Image of cloud-top temperature Available every 30 minutes around the globe

12 IR vs Microwave vs Visible
Visible: used in addition to IR in Dvorak; only available during the day; adds complexity to the algorithm.
Microwave: 3D view inside the storm, so it is easier to determine if an eye is present; only available every ~12 hr per satellite; microwave channels vary between satellites; can miss a storm entirely.

13 Storm “Best Track” Estimates of storm position and maximum wind speed
Updated every 6 hours from satellite and aircraft data

14 Seven Variables Used From Best Track
Position (latitude, longitude), motion vector (eastward and northward components), maximum wind speed, previous change in maximum winds, and date (Julian day).

15 Dvorak Classifications
Data from the CIRA archive: 4109 samples at 6-hr intervals, Atlantic only; 991 (24%) are eye cases.

16 A Note About “Truth” The Dvorak method gives forecasters some flexibility to interpret what they see. Using these classifications as truth will replicate any biases present in the Dvorak classifications.

17 Algorithm
1) Subsect/unroll satellite imagery
2) Form training/testing sets
3) Perform PCA for dimension reduction
4) Select predictors (using the sensitivity vector)
5) Train QDA/LDA
6) Use the testing set to evaluate performance
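The six steps can be sketched end-to-end with scikit-learn. The array names (`images`, `labels`) and the random stand-in data are hypothetical, and step 4 (predictor selection via the sensitivity vector) is omitted here; this is a sketch of the workflow, not the authors' code.

```python
# Sketch of the eye-detection pipeline (hypothetical stand-in data).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis,
    QuadraticDiscriminantAnalysis,
)

rng = np.random.default_rng(0)
images = rng.normal(size=(200, 6400))   # stand-in for unrolled 80x80 IR images
labels = rng.integers(0, 2, size=200)   # 1 = eye, 0 = no eye

# Step 2: 70/30 training/testing split
X_train, X_test, y_train, y_test = train_test_split(
    images, labels, test_size=0.3, random_state=0)

# Step 3: PCA for dimension reduction (keep the top 25 patterns)
pca = PCA(n_components=25).fit(X_train)
Z_train, Z_test = pca.transform(X_train), pca.transform(X_test)

# Step 5: train LDA and QDA on the reduced predictors
lda = LinearDiscriminantAnalysis().fit(Z_train, y_train)
qda = QuadraticDiscriminantAnalysis().fit(Z_train, y_train)

# Step 6: evaluate on the held-out testing set
print("LDA fraction correct:", lda.score(Z_test, y_test))
print("QDA fraction correct:", qda.score(Z_test, y_test))
```

With random stand-in labels the scores carry no meaning; the point is the order of operations, in particular that PCA is fit on the training set only.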

18 Three Versions of Algorithm
Model 1: best track input only. Model 2: IR satellite data only. Model 3: combined best track and IR data. The sensitivity vector (defined later) is used to pick the best set of predictors.

19 Step 1) Subsecting Imagery
Cut out an 80x80 pixel box centered on the best track position estimate (320 km x 320 km).

20 Unroll Imagery Unroll the 80x80 pixel box to form a 6400-element vector.
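The crop-and-unroll step is a few lines of NumPy. The scene array, its size, and the 4 km-per-pixel assumption implied by 80 pixels spanning 320 km are stand-ins for illustration:

```python
# Crop an 80x80 box around the storm center and unroll it to a
# 6400-element vector (hypothetical stand-in IR scene).
import numpy as np

full_image = np.arange(200 * 200, dtype=float).reshape(200, 200)
row_c, col_c = 100, 100   # pixel location of the best-track center

half = 40  # 80x80 pixels; at 4 km per pixel this spans 320 km x 320 km
box = full_image[row_c - half:row_c + half, col_c - half:col_c + half]
vector = box.ravel()      # unroll row by row

print(box.shape, vector.shape)  # (80, 80) (6400,)
```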

21 Step 2) Training/Testing Sets
The data were randomly shuffled; 70% was used for training and 30% for testing. Great care was taken not to bias results by using the testing data.

22 Normalization Subtract mean image. Divide by standard deviation image.
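The normalization step is per-pixel: subtract the mean image and divide by the standard-deviation image. A sketch with stand-in data (the statistics would be computed from the training samples only):

```python
# Per-pixel normalization with the mean and standard-deviation images.
import numpy as np

rng = np.random.default_rng(1)
train = rng.normal(loc=250.0, scale=10.0, size=(100, 6400))  # stand-in brightness temps

mean_image = train.mean(axis=0)   # one mean per pixel
std_image = train.std(axis=0)     # one standard deviation per pixel

train_norm = (train - mean_image) / std_image
print(train_norm.mean(), train_norm.std())  # approximately 0 and 1
```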

23 Problem We have 4109 total samples but 6400 dimensions per image, i.e. more dimensions than samples.
Solution: Use PCA to reduce the data to its most important patterns.

24 Step 3) Principal Component Analysis
PCA generates an eigenvector for each dimension (6400 total) Eigenvalues represent variance explained by each eigenvector

25 Dimension Reduction A subset of the eigenvectors can be selected based on variance explained (e.g. ~95%). Projecting the data onto this subset reduces its dimension, e.g. a 6400-element vector becomes a 25-element vector.
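Selecting the subset by cumulative variance explained and projecting onto it might look like this (stand-in data; on random data nearly all components are needed, whereas real imagery concentrates variance in far fewer patterns):

```python
# Pick the smallest number of eigenvectors reaching ~95% variance
# explained, then project the data onto them.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 6400))            # stand-in unrolled images

pca = PCA().fit(X)                          # eigenvectors sorted by variance
cumvar = np.cumsum(pca.explained_variance_ratio_)
k = int(np.searchsorted(cumvar, 0.95)) + 1  # smallest k with cumvar >= 0.95

Z = PCA(n_components=k).fit_transform(X)    # 6400 elements -> k elements
print(k, Z.shape)
```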

26 First 25 Eigenvectors Note: eigenvectors 7 & 8 resemble a curved band pattern.

27 Dimension Reduction Can construct approximation of original data with a few EOFs

28 Step 4) Select Predictors
Model 1: 7 best track predictors. Model 2: 25 IR EOF predictors with the highest variance explained. Model 3: sensitivity vector used to select the best predictors (10 IR EOF predictors + 4 best track predictors).

29 Step 5) QDA/LDA Discriminant Functions
QDA: δk(x) = −½ ln|Σk| − ½ (x − μk)ᵀ Σk⁻¹ (x − μk) + ln πk, where Σk is the covariance matrix for class k and μk is the mean for class k.
LDA: the same form with each Σk replaced by Σ, the weighted average of all Σk, which makes δk linear in x: δk(x) = xᵀ Σ⁻¹ μk − ½ μkᵀ Σ⁻¹ μk + ln πk.
Pick the class k with the largest δk value. Bi-variate normal distributions provide the probability of being in class k.

30 Sensitivity Vector Use LDA to give insight into the importance of the predictors. For each predictor z, calculate the change in the discriminant function difference for a one-standard-deviation change in z:
Δz(δ2 − δ1) = [∂(δ2 − δ1)/∂z] σz
Because the LDA δk are linear in x, the Δz are constants.
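A sketch of how such a sensitivity vector could be computed from a fitted scikit-learn LDA. The data and variable names are hypothetical; the key fact used is that for two classes, `coef_[0]` holds the gradient of the discriminant-function difference with respect to x, so scaling by each predictor's standard deviation gives a per-predictor sensitivity:

```python
# Sensitivity vector from a fitted binary LDA: change in (delta_1 - delta_0)
# per one-standard-deviation change in each predictor.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 7))      # stand-in for 7 best-track predictors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)

lda = LinearDiscriminantAnalysis().fit(X, y)

# coef_[0] is the gradient of the linear discriminant difference;
# multiply by each predictor's standard deviation to get the sensitivity.
sensitivity = lda.coef_[0] * X.std(axis=0)
ranking = np.argsort(-np.abs(sensitivity))
print("most important predictor index:", ranking[0])
```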

31 Step 6) Evaluation of Performance
Confusion-matrix-related metrics: fraction correct, true positive fraction, true negative fraction, false positive rate, false negative rate.
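These confusion-matrix metrics reduce to a few NumPy lines; the toy labels below are illustrative only:

```python
# Confusion-matrix metrics from true and predicted labels (1 = eye).
import numpy as np

y_true = np.array([1, 1, 1, 0, 0, 0, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 0, 0])

tp = np.sum((y_pred == 1) & (y_true == 1))
tn = np.sum((y_pred == 0) & (y_true == 0))
fp = np.sum((y_pred == 1) & (y_true == 0))
fn = np.sum((y_pred == 0) & (y_true == 1))

fraction_correct = (tp + tn) / len(y_true)
tpr = tp / (tp + fn)   # true positive fraction
tnr = tn / (tn + fp)   # true negative fraction
fpr = fp / (fp + tn)   # false positive rate
fnr = fn / (fn + tp)   # false negative rate
print(fraction_correct, tpr, tnr, fpr, fnr)
```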

32 Additional Metrics
Peirce Skill Score (PSS): evaluates the skill of the classifications compared to random guesses based on the training data distribution.
Brier Skill Score (BSS): evaluates the skill of the probabilities compared to constant probabilities obtained from the training data distribution.
For both BSS and PSS: 1 is perfect, 0 is no better than the no-skill scheme, and < 0 is worse than no skill.
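Both scores are easy to compute directly. The sketch below uses the common Hanssen–Kuipers form of the PSS (true positive rate minus false positive rate) and a BSS referenced against a constant climatological probability; the toy labels and the 0.24 base rate (the 24% eye fraction quoted earlier) are illustrative:

```python
# Peirce skill score (PSS) and Brier skill score (BSS) sketches.
import numpy as np

def peirce_skill_score(y_true, y_pred):
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return tp / (tp + fn) - fp / (fp + tn)  # TPR minus FPR

def brier_skill_score(y_true, prob, base_rate):
    bs = np.mean((prob - y_true) ** 2)              # forecast Brier score
    bs_ref = np.mean((base_rate - y_true) ** 2)     # constant-probability reference
    return 1.0 - bs / bs_ref

y_true = np.array([1, 1, 0, 0, 0, 0, 0, 0])
prob = np.array([0.9, 0.6, 0.2, 0.1, 0.3, 0.1, 0.2, 0.1])
pss = peirce_skill_score(y_true, (prob >= 0.5).astype(int))
bss = brier_skill_score(y_true, prob, base_rate=0.24)
print(pss, bss)
```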

33 Model 1 Used only information derived from the best track: lat, lon, vmax, Julian day, storm velocity, previous change in max winds. Compared QDA and LDA metrics. Used the sensitivity vector to understand the results.

34 Best Track Predictor Distributions
Look for differences between the classes.

35 Model 1 Performance Metrics
Notes: About 85% correct; BSS better for LDA; PSS slightly better for QDA.

36 Sensitivity Vector for Model 1
Note: The sign provides physical insight. For example, stronger storms or higher-latitude storms are more likely to have an eye.

37 Two-Predictor Model Pick the top two predictors from the sensitivity vector:
Vmax and lat. Verification is only a little worse than with 7 predictors. Plotting the class boundaries compares QDA/LDA: most of the signal is in Vmax, and QDA helps keep middle-latitude cases in while excluding very low- and high-latitude cases.

38 Model 2 – Satellite data only
Input: amplitudes (PCs) of the top 25 EOFs. Performance not as good as Model 1. The sensitivity vector shows which patterns help divide the classes.

39 Sensitivity Vector for Model 2

40 Top 4 EOFs for Model 2 Note that the top 4 are all very symmetric. The negative sign of the sensitivity vector for EOFs 9, 4, and 15 accounts for the fact that the cold part is in the middle.

41 Model 3 – Combined IR and Best Track
Want to keep the number of predictors small, so pick predictors whose sensitivity-vector magnitude is about 20% or more of the most important predictor's: 4 best track predictors (Vmax, lat, eastward and northward motion) and 10 IR predictors.

42 Model 3 Verification Results
About 90% correct classification (close to the ~95% accuracy of the “truth” data). BSS better for LDA (accuracy of probabilities); PSS better for QDA (accuracy of classifications). QDA has fewer samples for the eye covariance matrix. Performance metrics are averaged over 1000 shufflings of the training/testing sets; all data are used for the case studies.

43 Sensitivity Vector for Model 3
Max wind is the top predictor, but the IR predictors are the next three.

44 Class Boundaries for Two-Predictor Model (Vmax and IR EOF 1)
The right-hand side of the boundary isn't really doing anything; the left-hand curve does a better job of classifying weak cases.

45 Case Study Analysis Pick weak, moderate, and strong cases:
Arlene 2005, Danielle 2010, Katrina 2005. Use the full sample to make sure all cases for each storm are included. Plot classifications, probabilities, and truth for the full life cycle. Examine cases where the method failed.

46 Algorithm Analysis for Hurricane Danielle

47 Reasons for Mis-Classifications
The center position was not accurate (a possible fix is repositioning). Cold clouds were displaced from the center, usually due to wind shear (the wind shear vector is known, so it could be used as additional input or to rotate the coordinates of the IR data).

48 Conclusions LDA and QDA can be used to objectively identify tropical cyclone eyes using commonly available data (IR satellite imagery and basic storm parameters), with a 90% success rate. LDA is better for estimating probabilities; QDA is better for classification. The most important input is Vmax, and the IR data are more important than all other best track inputs.

49 Future Work Develop a method to re-center images. Use wind shear data.
Predict eye development using probabilities. Use the output to help with rapid intensification forecasting. Generalize the method to estimate all four Dvorak scene types.

50 Extra Slides

51 Step 5) QDA/LDA Need an estimate of the probability of being in class k given input vector x: P(C=k | x). Bayes rule relates P(C=k | x) to P(x | C=k). Fit bivariate normal distributions to get P(x | C=k) for each class. Take the natural log of P(C=k | x) to get a discriminant function for each class. For LDA, assume the class covariance matrices are equal, which makes the discriminant function linear in x.
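Written out, these steps follow standard discriminant-analysis algebra (reconstructed here, not copied from the slide):

```latex
% Bayes rule relates the posterior to the class-conditional density:
P(C=k \mid x) \;=\; \frac{P(x \mid C=k)\,\pi_k}{\sum_j P(x \mid C=j)\,\pi_j}

% With Gaussian class densities, taking logs and dropping terms common
% to all classes gives the discriminant function:
\delta_k(x) \;=\; -\tfrac{1}{2}\ln\lvert\Sigma_k\rvert
  \;-\; \tfrac{1}{2}(x-\mu_k)^{\top}\Sigma_k^{-1}(x-\mu_k)
  \;+\; \ln\pi_k

% LDA: assuming \Sigma_k = \Sigma for all k, the quadratic term in x
% cancels across classes, leaving a linear discriminant:
\delta_k(x) \;=\; x^{\top}\Sigma^{-1}\mu_k
  \;-\; \tfrac{1}{2}\mu_k^{\top}\Sigma^{-1}\mu_k \;+\; \ln\pi_k
```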

52 Danielle Misclassified Images
False Positives: False Negatives:

53 Algorithm Analysis for Hurricane Katrina

54 Katrina Misclassified Images
False Negatives:

