
1 Mining the FIRST Astronomical Survey
Imola K. Fodor and Chandrika Kamath, Center for Applied Scientific Computing, Lawrence Livermore National Laboratory. IPAM Workshop, January 2002.

2 Sapphire/IKF, CASC
Faint Images of the Radio Sky at Twenty-Centimeters (FIRST)
- On-going sky survey, started in 1993
- When completed, will cover more than 10,000 deg² to a flux density limit of 1.0 mJy (milli-Jansky)
- Current coverage is about 8,000 deg²
  - more than 32,000 two-million-pixel images
- There are about 90 radio sources per deg²
- Data available at the NRAO Very Large Array (VLA)

3 One goal of FIRST is to identify radio galaxies with a bent-double morphology
- A bent-double galaxy is …
- Problem: there is no definition of "bent-double"
- Rough characteristic: there is a radio-emitting "core", along with a number of (not necessarily two!) side-components that are "bent" around the core
- Astronomers search manually for bent-doubles
[Images: examples of bent-doubles and non bent-doubles]

4 Sapphire: use data mining to enhance the visual search for bent-doubles
- Use galaxies classified by astronomers to model the binary response variable Y
- Find features X and a model f(X) with the desired accuracy
- Aim: 10% misclassification error, as manual classification is not more accurate
[Pipeline diagram: FIRST images → pre-processing (denoising, feature extraction, dimension reduction) → "good" features → pattern recognition (classification) → bent/non-bent coordinates]

5 The FIRST catalog is based on fitting 2D elliptical Gaussians to denoised images
[Diagram: image maps of 1550 × 1150 pixels, 32K image maps at 7.1 MB each; a radio source (RS) occupies a region of about 64 pixels; the fits produce catalog entries (CE), 720K entries in all]
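The model underlying such a catalog fit can be illustrated directly; a minimal numpy sketch (the function name and parameter values are illustrative, not the FIRST pipeline's):

```python
import numpy as np

def elliptical_gaussian(x, y, amp, x0, y0, sigma_major, sigma_minor, theta):
    """2D elliptical Gaussian: peak amp at (x0, y0), axis widths
    sigma_major/sigma_minor, position angle theta in radians."""
    # Rotate coordinates into the frame of the ellipse axes
    dx, dy = x - x0, y - y0
    u = dx * np.cos(theta) + dy * np.sin(theta)
    v = -dx * np.sin(theta) + dy * np.cos(theta)
    return amp * np.exp(-0.5 * ((u / sigma_major) ** 2 + (v / sigma_minor) ** 2))

# Evaluate on a small image cutout; the peak sits at the source position
yy, xx = np.mgrid[0:64, 0:64]
img = elliptical_gaussian(xx, yy, amp=10.0, x0=32, y0=32,
                          sigma_major=6.0, sigma_minor=3.0, theta=0.3)
```

A catalog pipeline would fit these parameters to each denoised source region by nonlinear least squares; here the model is only evaluated forward.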

6 A first pre-processing step is to identify potential features to discriminate bents
- For the FIRST data, we extracted various features based on
  - radio intensities, angles, distances, …
- For galaxies with 3 catalog entries
  - a total of 103 features
  - three sets of single features, three pairs of double features, and the triple features
  - possible redundancies
- Reduce dimension using
  - domain knowledge
  - EDA
  - PCA
  - GLM step-wise model selection

7 Triple features for three catalog entries
[Diagram: triangle formed by three catalog entries A, B, C, with labels M, N, P and a, b, c]

8 Using exploratory data analysis (EDA), we reduced the number of features to 25
- Use EDA techniques such as
  - box-plots
  - multivariate plots
  - parallel-coordinate plots
  - the correlation matrix
- to
  - explore the data
  - find unusual observations
  - eliminate correlations among the features
- Call these the EDA features
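A programmatic stand-in for the correlation-elimination step can be sketched as follows; a minimal numpy illustration (the 0.9 threshold and the greedy drop rule are assumptions for the example, not the authors' procedure):

```python
import numpy as np

def drop_correlated(X, names, threshold=0.9):
    """Greedily drop one feature from each pair whose |correlation|
    exceeds the threshold (a crude stand-in for visual EDA screening)."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = list(range(X.shape[1]))
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if i in keep and j in keep and corr[i, j] > threshold:
                keep.remove(j)  # drop the later feature of the pair
    return [names[k] for k in keep]

rng = np.random.default_rng(0)
a = rng.normal(size=200)
b = a + 0.01 * rng.normal(size=200)   # nearly a duplicate of a
c = rng.normal(size=200)
X = np.column_stack([a, b, c])
print(drop_correlated(X, ["a", "b", "c"]))  # ['a', 'c']
```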

9 Example parallel-coordinate plot: nine variables split by bentness category
[Plot: bent vs. non-bent panels; annotations mark an unusual observation (X), the 3/2 sky regions for bent/non-bent, and a large negative correlation]

10 Principal component analysis (PCA) finds linear combinations of variables
Suppose we have p features x = (x_1, …, x_p)^T with covariance matrix Σ, and we want a linear combination a^T x with maximal variance, subject to a^T a = 1.
By the spectral decomposition theorem, Σ = Γ Λ Γ^T, the first PC y_1 = γ_1^T x has maximal variance, Var(y_1) = λ_1, and the PCs y_1, …, y_p are orthogonal (uncorrelated).
The total variance is preserved: Var(y_1) + … + Var(y_p) = λ_1 + … + λ_p = tr(Σ).
Dimension reduction: use the first k PCs as new "features".
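The decomposition and the variance-preservation property can be checked numerically; a minimal numpy sketch on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5)) @ rng.normal(size=(5, 5))  # correlated features
Xc = X - X.mean(axis=0)
S = np.cov(Xc, rowvar=False)              # sample covariance (p x p)

eigvals, eigvecs = np.linalg.eigh(S)      # spectral decomposition S = G L G^T
order = np.argsort(eigvals)[::-1]         # sort PCs by decreasing variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

Y = Xc @ eigvecs                          # principal component scores
# Total variance is preserved: sum of PC variances equals trace of S
print(np.allclose(eigvals.sum(), np.trace(S)))           # True
# Each PC's variance equals its eigenvalue; the first PC is largest
print(np.allclose(Y.var(axis=0, ddof=1), eigvals))       # True
```

Keeping the first k columns of `Y` is the "use the first k PCs as new features" step.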

11 We used PCA differently, to reduce the number of original features to 20
- The first 20 PCs explain 90% of the variance
- PCs are hard to interpret: instead of using 20 PCs, keep 20 of the original variables
- Multivariate Analysis (Mardia, Kent, Bibby)
  - consider the last PC, the one with the smallest variance
  - find its largest (in absolute value) coefficient, and discard the corresponding original variable
  - repeat the procedure with the second-to-last PC, and iterate until only 20 variables remain
- Call these the PCA features
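The elimination scheme above might be sketched as follows; a minimal numpy illustration on synthetic data (not the authors' code, and the function name is made up for the example):

```python
import numpy as np

def mkb_discard(X, names, n_keep):
    """Variable elimination via PCA (after Mardia, Kent & Bibby):
    walk the PCs from smallest variance upward and, for each, discard
    the remaining variable with the largest |coefficient|, until
    n_keep variables are left."""
    S = np.cov(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(S)    # eigenvalues in ascending order
    keep = set(range(len(names)))
    pc = 0                                  # start at the smallest-variance PC
    while len(keep) > n_keep:
        coeffs = np.abs(eigvecs[:, pc])
        drop = max(keep, key=lambda i: coeffs[i])
        keep.discard(drop)
        pc += 1
    return [names[i] for i in sorted(keep)]

rng = np.random.default_rng(2)
x0, x1, x2 = rng.normal(size=(3, 300))
x3 = x0 + 0.01 * rng.normal(size=300)       # x3 is nearly redundant with x0
X = np.column_stack([x0, x1, x2, x3])
print(mkb_discard(X, ["x0", "x1", "x2", "x3"], 3))
```

The smallest-variance PC points along the redundant x0/x3 direction, so one of that pair is discarded first.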

12 We also used step-wise model selection to reduce the number of variables
- Binary response: Y = {bent, non-bent}
- Explanatory variables: the features
- Logistic regression, step-wise model selection with the AIC as a measure of goodness (minimize the negative log-likelihood, with a penalty term for large models)
- Cannot use all 103 features because of correlations
- We identified the features selected by EDA or PCA
  - stepwise model selection => GLM 2 features (25)
- We identified the features selected by EDA and PCA
  - stepwise model selection => GLM 3 features (10)
  - stepwise model selection, including second-order interactions => GLM 4 features (9, +5 interactions)
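AIC-based stepwise selection can be sketched as a greedy forward search over logistic-regression models; a minimal numpy illustration on synthetic data (the talk does not specify its software, and the Newton fitting here is deliberately simplified):

```python
import numpy as np

def fit_logistic(X, y, iters=30):
    """Logistic regression via Newton's method; returns beta and log-likelihood."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = np.clip(p * (1.0 - p), 1e-9, None)
        beta = beta + np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (y - p))
    p = np.clip(1.0 / (1.0 + np.exp(-X @ beta)), 1e-9, 1 - 1e-9)
    return beta, float(np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)))

def forward_aic(X, y, names):
    """Greedy forward selection: add the feature that lowers
    AIC = -2*loglik + 2*(number of parameters) most; stop when none does."""
    n = len(y)
    selected = []
    def aic(idx):
        D = np.column_stack([np.ones(n)] + [X[:, j] for j in idx])
        _, ll = fit_logistic(D, y)
        return -2.0 * ll + 2.0 * (len(idx) + 1)
    best = aic([])
    while True:
        trials = [(aic(selected + [j]), j)
                  for j in range(len(names)) if j not in selected]
        if not trials:
            return [names[k] for k in selected]
        score, j = min(trials)
        if score >= best:
            return [names[k] for k in selected]
        best, selected = score, selected + [j]

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 3))
y = (rng.random(300) < 1.0 / (1.0 + np.exp(-2.0 * X[:, 0]))).astype(float)
print(forward_aic(X, y, ["x0", "x1", "x2"]))  # x0 carries the signal
```

A backward or mixed search (as in R's `step`) follows the same AIC comparison, just in the other direction.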

13 Pattern recognition uses the features from pre-processing to classify the data
[Diagram, an iterative and interactive classification process: extract features from training data → create classifier (decision tree, GLM) → check for accuracy → extract features for unclassified data → apply classifier to unclassified data → show results and obtain score → update training data, and repeat]

14 We use decision trees to classify the radio sources into bents and non-bents
- Use information gain to split: let S be the set of examples at a node, k the number of classes, and n_i the number of examples of class i in S
- Entropy(S) = -Σ_{i=1}^{k} (n_i/|S|) log (n_i/|S|)
- For a split of S into two subsets S_1 and S_2: Gain = Entropy(S) - Σ_{j=1}^{2} (|S_j|/|S|) Entropy(S_j)
[Diagram: example tree with node tests such as "radius > a?" and "color?"]
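The entropy and information-gain formulas above can be sketched in a few lines of pure Python (an illustration on toy class counts):

```python
import math

def entropy(counts):
    """Entropy of a vector of per-class counts, in bits."""
    n = sum(counts)
    return -sum(c / n * math.log2(c / n) for c in counts if c > 0)

def info_gain(parent, left, right):
    """Information gain of splitting `parent` into `left` + `right`
    (each a list of per-class example counts)."""
    n, nl, nr = sum(parent), sum(left), sum(right)
    return entropy(parent) - (nl / n) * entropy(left) - (nr / n) * entropy(right)

# A pure split of a 50/50 node recovers the full 1 bit of entropy
print(info_gain([10, 10], [10, 0], [0, 10]))  # 1.0
```

A tree-growing algorithm evaluates this gain for each candidate test (e.g. "radius > a?") and splits on the best one.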

15 Decision tree created with all the features: Tree 1
- Resubstitution error, train/test (90%) set: 2.8%
- Cross-validation error, train/validate (10%) set: 5.3%
[Tree diagram annotations: a leaf node with 11 non-bents; a leaf node with 4 bents; a leaf node with 145 items, (145-4) bents and 4 non-bents]

16 Decision tree created with the EDA and PCA features: Tree 2
- Resubstitution error: 1.7%
- Cross-validation error: 5.3%

17 Decision tree created with the GLM 3 features: Tree 3
- Resubstitution error: 2.8%
- Cross-validation error: 0%
- Using fewer, well-selected variables results in smaller and more accurate trees

18 We also used generalized linear models (GLMs) to classify the galaxies
- Linear models explain response variables in terms of linear combinations of explanatory variables: E(Y) = Xβ
- The least-squares estimate solves min_β ||Y - Xβ||², i.e. the normal equations (X^T X) β̂ = X^T Y
- There are no restrictions on the range of the fitted values
- GLMs allow such restrictions by modeling g(E(Y)) = Xβ, where g() is a monotone increasing link function
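The least-squares step, and its lack of range restrictions, can be illustrated in a few lines of numpy (synthetic data, illustrative only):

```python
import numpy as np

# Ordinary least squares: beta_hat = argmin ||y - X beta||^2 solves
# the normal equations (X^T X) beta = X^T y
rng = np.random.default_rng(4)
X = np.column_stack([np.ones(100), rng.normal(size=(100, 2))])
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + 0.1 * rng.normal(size=100)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(np.round(beta_hat, 2))
# Nothing constrains the fitted values X @ beta_hat to lie in [0, 1],
# which is the motivation for using a link function in a GLM
```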

19 Logistic regression is a special GLM suitable for modeling binary responses
- Y = {0, 1}, with p = P(Y = 1)
- Logit link and variance functions: g(p) = log(p / (1 - p)), V(p) = p(1 - p)
- The likelihood is non-linear in the parameters, so there is no closed-form solution: use iteratively reweighted least squares to find β̂
- Given a fitted probability p̂(x), classify Ŷ = I(p̂(x) > c), where I(a) is {0, 1} according to {a = False, a = True}, and the threshold c is generally taken to be 0.5
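Iteratively reweighted least squares for the logit link might be sketched as follows; a minimal numpy illustration with synthetic data (not the authors' implementation):

```python
import numpy as np

def irls_logistic(X, y, iters=30):
    """IRLS for logistic regression: repeatedly solve a weighted
    least-squares problem with weights W = p(1-p) and working response z."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        eta = X @ beta
        p = 1.0 / (1.0 + np.exp(-eta))
        W = np.clip(p * (1.0 - p), 1e-9, None)
        z = eta + (y - p) / W                    # working response
        beta = np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (W * z))
    return beta

rng = np.random.default_rng(5)
X = np.column_stack([np.ones(400), rng.normal(size=400)])
p_true = 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * X[:, 1])))
y = (rng.random(400) < p_true).astype(float)

beta = irls_logistic(X, y)
y_hat = (1.0 / (1.0 + np.exp(-X @ beta)) > 0.5)  # classify at threshold 0.5
print(np.round(beta, 1), float((y_hat == y).mean()))
```

Each IRLS iteration is exactly a weighted least-squares solve, which is why logistic regression fits naturally into the GLM framework of the previous slide.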

20 GLM created with the GLM 2 features

21 GLM created with the GLM 3 features

22 GLM created with the GLM 4 features

23 Misclassification errors of the best models are below the desired 10% in the training set

Misclassification errors based on 10 ten-fold cross-validations in the training set:

       Tree 1   Tree 2   Tree 3
Mean   11.1%    9.5%     8.3%
SE     0.4%

       GLM 2    GLM 3    GLM 4
Mean   18.74%   7.84%    4.00%
SE     4.34%    0.91%

- Careful selection of variables reduces error
- Trees are less sensitive to the input features than GLMs
- GLM 4 has the lowest misclassification errors
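The "10 ten-fold cross-validations" protocol behind these tables can be sketched generically; a minimal numpy illustration with a toy threshold classifier (illustrative only, not the paper's models or data):

```python
import numpy as np

def tenfold_cv_error(X, y, fit, predict, n_repeats=10, seed=0):
    """Repeat ten-fold cross-validation n_repeats times; return one
    mean misclassification error per repetition (for a mean and SE)."""
    rng = np.random.default_rng(seed)
    errors = []
    for _ in range(n_repeats):
        idx = rng.permutation(len(y))
        folds = np.array_split(idx, 10)
        fold_errs = []
        for k in range(10):
            test = folds[k]
            train = np.concatenate([folds[j] for j in range(10) if j != k])
            model = fit(X[train], y[train])
            fold_errs.append(np.mean(predict(model, X[test]) != y[test]))
        errors.append(np.mean(fold_errs))
    return np.array(errors)

# Toy classifier: threshold the first feature at the training-set midpoint
fit = lambda X, y: (X[y == 0, 0].mean() + X[y == 1, 0].mean()) / 2
predict = lambda thr, X: (X[:, 0] > thr).astype(int)

rng = np.random.default_rng(6)
X = rng.normal(size=(200, 2))
X[:, 0] += np.repeat([0.0, 2.0], 100)       # two classes, 2 sigma apart
y = np.repeat([0, 1], 100)
errs = tenfold_cv_error(X, y, fit, predict)
print(f"mean={errs.mean():.3f}  SE={errs.std(ddof=1) / np.sqrt(len(errs)):.3f}")
```

Reporting the mean and standard error over the 10 repetitions is what produces the Mean/SE rows of the tables above.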

24 Our methods identified the "interesting" part of the FIRST dataset
- 15,059 three-entry radio sources in the 2000 catalog
- 2,577 labeled as bent by all six methods
- Astronomers can start by exploring the smaller set
- Visually explore random samples to assess the percentage of false positives and missed bents
[Table: bent / non-bent counts for the entire 2000 catalog under Tree 1, Tree 2, Tree 3, GLM 2, GLM 3, GLM 4, and all six methods combined]

25 Example classifications for previously unlabeled galaxies are encouraging
- The labels commonly assigned by the six methods are correct in the examples below
[Images: example bent and non-bent classifications]

26 Summary
- Described how data mining can help identify radio galaxies with bent-double morphology
- Illustrated specific data mining steps
  - data pre-processing is crucial
- In our experience, data mining is semi-automatic
  - interaction and feedback are required at many stages
  - domain knowledge is essential
- Multi-disciplinary collaboration is challenging, but rewarding
  - astronomy, computer science, statistics
- There is always room for improvement
  - alternative techniques
  - your feedback is welcome!

27 The Sapphire team: supporting a multi-disciplinary endeavor
- Chandrika Kamath (Project Lead)
- Erick Cantú-Paz
- Imola K. Fodor
- Nu A. Tang
Thanks to the FIRST scientists: Robert Becker, Michael Gregg, David Helfand, Sally Laurent-Muehleisen, and Rick White.
UCRL-JC. This work was performed under the auspices of the U.S. Department of Energy by University of California, Lawrence Livermore National Laboratory, under contract W-7405-Eng-48.
