Information Extraction Principles for Hyperspectral Data
David Landgrebe, Professor of Electrical & Computer Engineering, Purdue University


1 Information Extraction Principles for Hyperspectral Data — David Landgrebe, Professor of Electrical & Computer Engineering, Purdue University (landgreb@ecn.purdue.edu). Outline: A Historical Perspective; Data and Analysis Factors; Hyperspectral Data Characteristics; Examples; Summary of Key Factors.

2 Brief History — Remote Sensing of the Earth (Atmosphere, Oceans, Land): 1957 - Sputnik; 1958 - National Space Act, NASA formed; 1960 - TIROS I; 1960-1980 - some 40 Earth observational satellites flown.

3 Thematic Mapper Image (image pixels enlarged 10 times)

4 Three Generations of Sensors: MSS (1968) - 6-bit data; TM (1975) - 8-bit data; Hyperspectral (1986) - 10-bit data.

5 Systems View: Sensor → On-Board Processing → Preprocessing → Data Analysis → Information Utilization, with Human Participation with Ancillary Data (Ephemeris, Calibration, etc.).

6 Scene Effects on Pixel

7 Data Representations — Image Space, Spectral Space, Feature Space. Image Space: geographic orientation. Spectral Space: relates to the physical basis for the response. Feature Space: for use in pattern analysis.

8 Data Classes

9 SCATTER PLOT FOR TYPICAL DATA

10 BHATTACHARYYA DISTANCE — the sum of a Mean Difference Term and a Covariance Term.
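The slide's equation itself did not survive the transcript; for reference, the standard Bhattacharyya distance between two normal classes, showing the two terms named above, is
\[ B = \underbrace{\tfrac{1}{8}(\mu_2 - \mu_1)^T \Big[\tfrac{\Sigma_1 + \Sigma_2}{2}\Big]^{-1} (\mu_2 - \mu_1)}_{\text{mean difference term}} \;+\; \underbrace{\tfrac{1}{2}\ln\frac{\big|\tfrac{\Sigma_1 + \Sigma_2}{2}\big|}{\sqrt{|\Sigma_1|\,|\Sigma_2|}}}_{\text{covariance term}} . \]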

11 Vegetation in Spectral Space — Laboratory Data: two classes of vegetation.

12 Scatter Plots of Reflectance

13 Vegetation in Feature Space

14 Hughes Effect — G. F. Hughes, "On the mean accuracy of statistical pattern recognizers," IEEE Transactions on Information Theory, Vol. IT-14, pp. 55-63, 1968.

15 A Simple Measurement Complexity Example

16 Classifiers of Varying Complexity: Quadratic Form; Fisher Linear Discriminant - common class covariance; Minimum Distance to Means - ignores the second moment.
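A minimal sketch of how these three discriminant functions differ (my own illustration; the function names and the use of numpy are assumptions, not the slide's notation). Each returns a score, and a pixel is assigned to the class with the largest score.

```python
import numpy as np

def quadratic_discriminant(x, mean, cov):
    """Quadratic form: uses each class's own mean and covariance."""
    diff = x - mean
    return -0.5 * diff @ np.linalg.inv(cov) @ diff - 0.5 * np.log(np.linalg.det(cov))

def fisher_linear_discriminant(x, mean, common_cov):
    """Fisher linear case: one covariance shared by all classes,
    so the class-independent log-determinant term can be dropped."""
    diff = x - mean
    return -0.5 * diff @ np.linalg.inv(common_cov) @ diff

def minimum_distance_to_means(x, mean):
    """Ignores second moments entirely: the nearest class mean wins."""
    return -float(np.sum((x - mean) ** 2))
```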

17 Classifier Complexity - continued: Correlation Classifier; Spectral Angle Mapper; Matched Filter - Constrained Energy Minimization; other "nonparametric" types - Parzen window estimators, fuzzy set-based, neural network implementations, K nearest neighbor (K-NN), etc.
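As an illustration of two of these (not taken from the slides), both the Spectral Angle Mapper and the correlation classifier compare a pixel spectrum to a reference spectrum by shape rather than overall brightness; a minimal sketch:

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Spectral Angle Mapper: angle between the two spectra;
    a smaller angle means a better match, independent of illumination scale."""
    cos_angle = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos_angle, -1.0, 1.0))

def correlation_score(pixel, reference):
    """Correlation classifier: like SAM, but each spectrum is mean-removed first."""
    p, r = pixel - pixel.mean(), reference - reference.mean()
    return np.dot(p, r) / (np.linalg.norm(p) * np.linalg.norm(r))
```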

18 Covariance Coefficients to be Estimated — assume a 5-class problem in 6 dimensions. Normal maximum likelihood: estimate diagonal (b) and off-diagonal (a) coefficients separately for each class. Ignore correlation between bands: estimate only the diagonal coefficients (b) for each class. Assume a common class covariance: estimate one shared set of off-diagonal (c) and diagonal (d) coefficients. Ignore correlation and assume a common covariance: estimate only the shared diagonal coefficients (d).
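A hedged sketch of the four covariance models the slide contrasts, written as one helper that estimates a covariance per class from training samples (numpy illustration only; the function and model names are mine):

```python
import numpy as np

def class_covariances(samples_by_class, model="full"):
    """samples_by_class: list of (n_i, p) arrays, one per class.
    Returns one p-by-p covariance estimate per class under the chosen model."""
    covs = [np.cov(s, rowvar=False) for s in samples_by_class]
    if model == "full":              # per-class: coefficients a and b
        return covs
    if model == "diagonal":          # per-class, bands uncorrelated: coefficients b
        return [np.diag(np.diag(c)) for c in covs]
    # Pool the per-class estimates into a single common covariance.
    pooled = sum((len(s) - 1) * c for s, c in zip(samples_by_class, covs))
    pooled = pooled / sum(len(s) - 1 for s in samples_by_class)
    if model == "common":            # shared: coefficients c and d
        return [pooled] * len(covs)
    if model == "common_diagonal":   # shared, bands uncorrelated: coefficients d
        return [np.diag(np.diag(pooled))] * len(covs)
    raise ValueError(f"unknown model: {model}")
```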

19 EXAMPLE SOURCES OF CLASSIFICATION ERROR

20 Number of Coefficients to be Estimated — assume 5 classes and p features.
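The slide's table is not in the transcript; under the usual counting (a symmetric $p \times p$ covariance matrix has $p(p+1)/2$ free coefficients, and each class mean has $p$), the totals for the 5-class case work out roughly as
\[ \begin{aligned} \text{full per-class covariances:}\quad & 5\big[p + \tfrac{p(p+1)}{2}\big] \\ \text{diagonal per-class covariances:}\quad & 5(p + p) = 10p \\ \text{common covariance:}\quad & 5p + \tfrac{p(p+1)}{2} \\ \text{common diagonal covariance:}\quad & 5p + p = 6p \end{aligned} \]
For example, at $p = 6$ the full model needs $5(6 + 21) = 135$ coefficients, while the common diagonal model needs only $36$.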

21 Intuition and Higher Dimensional Space — Borsuk's Conjecture: if you break a stick in two, both pieces are shorter than the original. Keller's Conjecture: it is possible to use cubes (hypercubes) of equal size to fill an n-dimensional space, leaving no overlaps or gaps. Counter-examples to both have been found in higher dimensional spaces. (Science, Vol. 259, 1 Jan 1993, pp. 26-27.)

22 The Geometry of High Dimensional Space: the volume of a hypercube concentrates in the corners; the volume of a hypersphere concentrates in the outer shell.
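A small numerical check of both claims (an illustration, not from the slides): the fraction of a hypercube's volume taken up by the inscribed hypersphere, and the fraction of a hypersphere's volume lying in its outermost 5% shell, as the dimension grows.

```python
import math

for d in (1, 2, 5, 10, 20, 50):
    # Unit-radius hypersphere volume vs. its bounding hypercube (side length 2):
    sphere = math.pi ** (d / 2) / math.gamma(d / 2 + 1)
    cube = 2.0 ** d
    # Fraction of the hypersphere's volume in the outer 5% shell (0.95 <= r <= 1):
    outer_shell = 1.0 - 0.95 ** d
    print(f"d={d:3d}  sphere/cube={sphere / cube:.2e}  outer-shell fraction={outer_shell:.3f}")
```

The sphere-to-cube ratio collapses toward zero (the cube's volume migrates into its corners), while the outer-shell fraction approaches one.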

23 Some Implications: High dimensional space is mostly empty - data in high dimensional space are mostly in a lower dimensional structure. Normally distributed data will tend to concentrate in the tails; uniformly distributed data will concentrate in the corners.

24 How can that be? The volume of a hypersphere of radius $r$ in $d$ dimensions is
\[ V_d(r) = \frac{\pi^{d/2}}{\Gamma(d/2 + 1)}\, r^d , \]
so the differential volume at radius $r$ is
\[ \frac{dV_d}{dr} = \frac{d\,\pi^{d/2}}{\Gamma(d/2 + 1)}\, r^{d-1} , \]
which grows as $r^{d-1}$ and therefore places almost all of the volume near the outer shell.

25 How can that be? (continued) For normally distributed data in $d$ dimensions, the probability mass at radius $r$ is proportional to
\[ p(r) \propto r^{\,d-1} e^{-r^2/(2\sigma^2)} , \]
so the $r^{d-1}$ factor pushes the bulk of the mass away from the mean and out into the tails.
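A quick simulation (illustration only, not from the slides) showing the same effect numerically: samples from a standard normal in $d$ dimensions pile up at radius near $\sqrt{d}$, leaving the neighborhood of the mean essentially empty.

```python
import numpy as np

rng = np.random.default_rng(0)
for d in (1, 10, 100, 200):
    # Distance of each sample from the mean of a d-dimensional standard normal.
    radii = np.linalg.norm(rng.standard_normal((10_000, d)), axis=1)
    print(f"d={d:4d}  mean radius={radii.mean():6.2f}  sqrt(d)={d ** 0.5:6.2f}"
          f"  fraction with r < 1: {(radii < 1).mean():.4f}")
```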

26 MORE ON GEOMETRY: The diagonals in high dimensional spaces become nearly orthogonal to all coordinate axes. Implication: the projection of any cluster onto any diagonal, e.g. by averaging features, could destroy information.
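One line of algebra (not on the slide) makes this concrete: the unit diagonal $(1,\dots,1)/\sqrt{d}$ makes the same angle with every coordinate axis $e_i$,
\[ \cos\theta_d = \frac{1}{\sqrt{d}} \;\longrightarrow\; 0 \quad \text{as } d \to \infty , \]
so at $d = 200$, for instance, the diagonal already sits about $86°$ away from every axis.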

27 STILL MORE GEOMETRY: The number of labeled samples needed for supervised classification increases rapidly with dimensionality. In a specific instance, it has been shown that the number of samples required increases linearly with dimensionality for a linear classifier and as the square of the dimensionality for a quadratic classifier; it has been estimated that the number increases exponentially for a non-parametric classifier. For most high dimensional data sets, lower dimensional linear projections tend to be normal or a combination of normals.

28 A HYPERSPECTRAL DATA ANALYSIS SCHEME: 200-Dimensional Data → Class-Conditional Feature Extraction → Feature Selection → Classifier/Analyzer → Class-Specific Information.

29 Finding Optimal Feature Subspaces: Feature Selection (FS); Discriminant Analysis Feature Extraction (DAFE); Decision Boundary Feature Extraction (DBFE); Projection Pursuit (PP). Available in MultiSpec via WWW at: http://dynamo.ecn.purdue.edu/~biehl/MultiSpec/ Additional documentation via WWW at: http://dynamo.ecn.purdue.edu/~landgreb/publications.html
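As a rough sketch of the discriminant-analysis style of feature extraction (an LDA-like outline under my own assumptions, not MultiSpec's actual DAFE implementation), the projection directions can be taken from the generalized eigenvectors of the between-class versus within-class scatter matrices:

```python
import numpy as np
from scipy.linalg import eigh

def dafe_like_features(samples_by_class, n_features):
    """Return projection directions maximizing between-class scatter
    relative to within-class scatter (columns of the returned matrix)."""
    means = [s.mean(axis=0) for s in samples_by_class]
    grand_mean = np.vstack(samples_by_class).mean(axis=0)
    Sw = sum((len(s) - 1) * np.cov(s, rowvar=False) for s in samples_by_class)
    Sb = sum(len(s) * np.outer(m - grand_mean, m - grand_mean)
             for s, m in zip(samples_by_class, means))
    # Generalized eigenproblem Sb v = lambda Sw v; keep the leading eigenvectors.
    eigvals, eigvecs = eigh(Sb, Sw)
    return eigvecs[:, ::-1][:, :n_features]

# Usage sketch: W = dafe_like_features(training_samples, 10); reduced = pixels @ W
```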

30 Hyperspectral Image of DC Mall — HYDICE Airborne System: 1208 scan lines, 307 pixels/scan line; 210 spectral bands in the 0.4-2.4 µm region; 155 megabytes of data (not yet geometrically corrected).

31 Define Desired Classes — training areas designated by polygons outlined in white.

32 Thematic Map of DC Mall (no preprocessing involved)
Legend: Roofs, Streets, Grass, Trees, Paths, Water, Shadows

Operation                     CPU Time (sec.)       Analyst Time
Display Image                 18
Define Classes                                      < 20 min.
Feature Extraction            12
Reformat                      67
Initial Classification        34
Inspect and Modify Training                         ≈ 5 min.
Final Classification          33
Total                         164 sec = 2.7 min.    ≈ 25 min.

33 Hyperspectral Potential - Simply Stated: Assume 10-bit data in a 100-dimensional space. That is (1024)^100 ≈ 10^300 discrete locations. Even for a data set of 10^6 pixels, the probability of any two pixels lying in the same discrete location is vanishingly small.
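A quick check of the arithmetic (my own, not from the slide): with 10-bit quantization in each of 100 bands there are
\[ 1024^{100} = (2^{10})^{100} = 2^{1000} \approx 1.1\times10^{301} \]
discrete cells, and by the birthday-problem bound the probability that any two of $10^6$ pixels fall in the same cell is at most about $\binom{10^6}{2}/10^{301} \approx 5\times10^{-290}$.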

34 Summary - Limiting Factors (across the system chain: Sensor → On-Board Processing → Preprocessing → Data Analysis → Information Utilization, with Human Participation with Ancillary Data; Ephemeris, Calibration, etc.): Scene - the most complex and dynamic part. Sensor - also not under the analyst's control. Processing System - the analyst's choices.

35 Limiting Factors: Scene - varies from hour to hour and from sq. km to sq. km. Sensor - spatial resolution, spectral bands, S/N. Processing System - classes to be labeled, number of samples to define the classes, complexity of the classifier, features to be used (exhaustive, separable, of informational value).

36 Sources of Ancillary Input - Possibilities: ground observations; "imaging spectroscopy" - from the ground, of the ground; previously gathered spectra ("end members"); image space, spectral space, feature space.

37 Use of Ancillary Input - A Key Point: Ancillary input is used to label training samples; the training samples are then used to compute quantitative class descriptions. Result: this reduces or eliminates the need for many types of preprocessing by normalizing out the differences between the class descriptions and the data.

