Classification & Vegetation Indices


1 Remote Sensing Part 4: Classification & Vegetation Indices

2 Classification Introduction
Humans are classifiers by nature - we're always putting things into categories.
To classify things, we use sets of criteria.
Examples: classifying people by age, gender, race, job/career, etc.
  Criteria might include appearance, style of dress, pitch of voice, build, hair style, language/lexicon, etc.
Ambiguity comes from:
  1) our classification system (i.e., what classes we choose)
  2) our criteria (some criteria don't differentiate people with complete accuracy)
  3) our data (i.e., people who fit multiple categories and people who fit no categories)

3 Non-Remote Sensing Classification Example
"Sorting incoming fish on a conveyor according to species using optical sensing"
Species: sea bass and salmon
** The following data are just hypothetical

4 Methods
Set up a camera and take some sample images to extract features:
  Length
  Lightness
  Width
  Number and shape of fins
  Position of the mouth, etc.

5 Scanning the Fish

6 Classification #1
Use the length of the fish as a possible feature for discrimination

7 Fish length alone is a poor feature for classifying fish type
Using only length we would be correct 50-60% of the time.
That's not great, because random guessing (i.e., flipping a coin) would be right ~50% of the time if there are an equal number of each fish type.

8 Classification #2
Use the lightness (i.e., color) of the fish as a possible feature for discrimination

9 Fish lightness alone is a pretty good feature for classifying fish by type
Using only lightness we would be correct ~ 80% of the time

10 Classification #3
Use the width & lightness (i.e., color) of the fish as possible features for discrimination

11 Fish lightness AND fish width do a very good job of classifying fish by type
Using lightness AND width we would be correct ~90% of the time
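A minimal Python sketch of the nearest-mean idea behind this slide: measure each training fish on two features (lightness and width) and assign a new fish to the class whose mean is closest. The species means and test fish below are made-up values, consistent with the earlier note that the fish data are hypothetical.

# Toy sketch (made-up numbers): nearest-mean classification of fish
# using two features (lightness, width), mirroring the idea that two
# features separate the species better than one.
import numpy as np

# Hypothetical class means from training fish: [lightness, width]
means = {
    "salmon":   np.array([7.0, 3.0]),
    "sea bass": np.array([4.0, 5.5]),
}

def classify(fish):
    """Assign the fish to the class whose mean is closest in feature space."""
    fish = np.asarray(fish, dtype=float)
    return min(means, key=lambda label: np.linalg.norm(fish - means[label]))

print(classify([6.5, 3.2]))   # -> salmon
print(classify([4.2, 5.0]))   # -> sea bass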

12 How does this relate to remote sensing?
Instead of fish types, we are typically interested in land cover
  For example: forests, crops, urban areas
Instead of fish characteristics, we have reflectance in the spectral bands collected by the sensor
  For example: Landsat TM bands 1-6 instead of fish length, width, lightness, etc.


15 Imagery Classification
Two main types of classification:
  Unsupervised
    Classes based on statistics inherent in the remotely sensed data itself
    Classes do not necessarily correspond to real-world land cover types
  Supervised
    A classification algorithm is "trained" using ground truth data
    Classes correspond to real-world land cover types determined by the user

16 Notes
For ease of display, the following examples show just 2 bands:
  one band on the X-axis, one band on the Y-axis
In reality, computers use all bands when doing classifications
These types of graphs are often called feature space
The points displayed on the graphs correspond to pixels from an image
The term cloud sometimes refers to the amorphous blob(s) of pixels in the feature space

17 Unsupervised Classification
Classes are created based on the locations of the pixel data in feature space
[Feature space plot: red BVs on the x-axis, infrared BVs on the y-axis, both 0-255]

18 Unsupervised Classification: A Computer Algorithm Finds Clusters
[Feature space plot: red BVs vs. infrared BVs (0-255), with clusters identified by the algorithm]
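The slides do not name a specific clustering algorithm; as one illustration, here is a minimal Python sketch using k-means (via scikit-learn) on synthetic two-band pixel values. The band values, number of clusters, and class names are assumptions for the example only.

# Minimal sketch of one clustering algorithm (k-means) that could play the
# role of the "computer algorithm finds clusters" step. Band values and the
# number of clusters are illustrative assumptions, not values from the slides.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic pixels in a 2-band feature space (red BV, infrared BV), 0-255
water  = rng.normal([30, 20],  5, size=(200, 2))   # dark in both bands
soil   = rng.normal([120, 90], 8, size=(200, 2))
forest = rng.normal([40, 150], 8, size=(200, 2))   # bright in infrared
pixels = np.clip(np.vstack([water, soil, forest]), 0, 255)

# Ask for 3 spectral clusters; labels are just 0, 1, 2 until a human
# attributes them to real land cover types (the "attribution phase")
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pixels)
print(kmeans.cluster_centers_)   # cluster centers in (red, infrared) space
print(kmeans.labels_[:10])       # cluster id assigned to the first 10 pixels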

19 Unsupervised Classification
Attribution phase – performed by a human
[Feature space plot: red BVs vs. infrared BVs (0-255), with clusters labeled agriculture, forest, soil, and water]

20 Problems with Unsupervised Classification
The computer may consider these 2 clusters (forest and agriculture) as one cluster
The computer may consider this cluster (soil) to be 2 clusters
[Feature space plot: red BVs vs. infrared BVs (0-255) illustrating both cases]

21 Supervised Classification
We "train" the computer program using ground truth data
I.e., we tell the computer what our classes (e.g., trees, soil, agriculture, etc.) "look like"
[Photos: coniferous trees and deciduous trees]

22 Supervised Classification
[Feature space plot: red BVs vs. infrared BVs (0-255), with training sample pixels highlighted among the other pixels]

23 Supervised Classification
No attribution phase is necessary because we define the classes beforehand
[Feature space plot: red BVs vs. infrared BVs (0-255), with classes agriculture, forest, soil, and water]

24 Problems with Supervised Classification
[Feature space plot: red BVs vs. infrared BVs (0-255), with training clusters for agriculture, forest, soil, and water, and one pixel that falls in none of them: "What's this?"]

25 What is the computer actually doing?
The classification generates statistics for the center, the size, and the shape of the sample pixel clouds.
The computer then classifies all the remaining pixels in the image using these statistical values.
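A minimal Python sketch of this idea, using a Mahalanobis-distance rule as a stand-in for whatever specific algorithm the software uses: each training cloud is summarized by its mean (center) and covariance matrix (size and shape), and other pixels are assigned to the class they fit best. The two-band sample values are made up.

# Summarize each training "cloud" by mean and covariance, then assign every
# other pixel to the class whose cloud it fits best (Mahalanobis distance).
import numpy as np

def train(training_pixels):
    """training_pixels: {class_name: (n, n_bands) array of sample pixels}."""
    stats = {}
    for name, samples in training_pixels.items():
        mean = samples.mean(axis=0)                             # center of the cloud
        cov_inv = np.linalg.inv(np.cov(samples, rowvar=False))  # size/shape
        stats[name] = (mean, cov_inv)
    return stats

def classify(pixel, stats):
    """Assign pixel to the class with the smallest Mahalanobis distance."""
    def dist(name):
        mean, cov_inv = stats[name]
        d = pixel - mean
        return float(d @ cov_inv @ d)
    return min(stats, key=dist)

# Tiny illustration with made-up 2-band training samples
rng = np.random.default_rng(1)
stats = train({
    "water":  rng.normal([30, 20],  5, size=(50, 2)),
    "forest": rng.normal([40, 150], 8, size=(50, 2)),
})
print(classify(np.array([35.0, 140.0]), stats))   # -> forest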

26 Example: Remote Sensing of Clouds
A point cloud for clouds (feature-space clouds vs. real clouds)... confusing, I know.
The point is that for REAL data, nice, distinct clusters are not the norm.

27 Supervised Classification: Training Samples
Users survey (using GPS) areas of "pure" land cover for all possible land cover types in an image, OR
Users "heads-up" digitize "pure" areas using expert knowledge and/or higher spatial resolution imagery
The rest of the image is classified based on the spectral characteristics of the training sites

28 Classification of Nang Rong Imagery
Shown are Landsat MSS, TM, and ETM image classification results:
  (a) Nov 1979, (b) Nov 1992, (c) Nov 2001
  Classes: upland agriculture, forest, rice, water, built-up

29 Land Use/Cover Change in Nang Rong, Thailand
[Paired images comparing 1954 and 1994]

30 Example Classification Results (Bangkok, Thailand)

31 Accuracy Assessments
After classifying an image, we want to know how well the classification worked.
To find out, we must conduct an accuracy assessment.

32 How are accuracy assessments done?
Basically, we need to compare the classification results with real land cover
As with training data, the real land cover data can be field data (best) or samples from higher spatial resolution imagery (easier)
What points should we use for the accuracy assessment? Possible options (there are others; both are sketched below):
  Random points
  Stratified random points (each class represented with an equal number of points)
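A minimal Python sketch of the two sampling options, applied to a classified map held as a NumPy array; the class codes and point counts are illustrative assumptions.

# Generate accuracy-assessment points from a classified raster (NumPy array).
import numpy as np

rng = np.random.default_rng(0)
classified = rng.integers(1, 5, size=(100, 100))   # fake map with classes 1-4

def random_points(class_map, n):
    """n simple random pixel locations anywhere on the map."""
    rows = rng.integers(0, class_map.shape[0], n)
    cols = rng.integers(0, class_map.shape[1], n)
    return list(zip(rows.tolist(), cols.tolist()))

def stratified_random_points(class_map, n_per_class):
    """The same number of random pixel locations within each mapped class."""
    points = []
    for cls in np.unique(class_map):
        rows, cols = np.nonzero(class_map == cls)
        pick = rng.choice(len(rows), size=n_per_class, replace=False)
        points += [(int(rows[i]), int(cols[i]), int(cls)) for i in pick]
    return points

print(len(random_points(classified, 50)))             # 50 points
print(len(stratified_random_points(classified, 25)))  # 25 points x 4 classes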

33 Classification Challenges
What problem might occur when gathering points for an accuracy assessment (and to a lesser extent, training areas)? Can we use the same points for the accuracy assessment that we used to train the classification?

34 Ikonos Imagery: Glacier National Park
Because this image is of an alpine area, an illumination correction was required. This is evident because we can effectively see the topography just from differences in brightness values. We can recognize people's faces under different lighting conditions, but computers have a harder time with such differences, so we need to correct for them. This is the overhead view of the same image; note that even without the DEM drape, we can distinguish topography from illumination differences. For this reason I performed an illumination correction on the image.

35 Classification Results
Following illumination correction, we performed a classification. Several methods were tested before we finally selected an unsupervised approach in which 30 classes were eventually reduced to just 5, which we felt balanced accuracy while still capturing the fundamental cover types present in the ATE. The assignment of cover types to the classes was accomplished using field data acquired during 2005.

36 Accuracy Assessment Table
Rows are the reference data; columns are the classified data
Values on the diagonal are correctly classified
The values in red are the producer's accuracy for each class
  A.k.a. errors of omission
  E.g., "how many pixels that ARE water (13) are classified AS water (12)"
The values in blue are the user's accuracy for each class
  A.k.a. errors of commission
  E.g., "how many pixels classified AS water (14) ARE water (12)"
Overall accuracy = # of correctly classified pixels / total # of pixels
The Kappa statistic is basically the overall accuracy adjusted for how many pixels we would expect to correctly classify by chance alone (these measures are sketched in code below)
The accuracy assessment is based on 600 sample points: 300 random on the landscape and 300 within the sample ATE transects I'll describe in a moment. Overall accuracy was strong, as was the accuracy for all classes, with the exception of the mixed & deciduous class. This is not surprising because this class primarily represents patches of trees (often with krummholz growth forms) that were smaller than the 10-meter pixel size.
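A minimal Python sketch of these measures computed from a small, made-up confusion matrix (rows = reference, columns = classified, as above); the numbers are not those from the slide.

# Accuracy measures from a confusion matrix: producer's and user's accuracy,
# overall accuracy, and kappa. The matrix values are illustrative only.
import numpy as np

# reference (rows): water, forest, soil ; classified (columns): same order
cm = np.array([
    [12,  1,  0],
    [ 2, 40,  3],
    [ 0,  4, 30],
])

total = cm.sum()
correct = np.trace(cm)

producers_acc = np.diag(cm) / cm.sum(axis=1)   # per class, 1 - error of omission
users_acc     = np.diag(cm) / cm.sum(axis=0)   # per class, 1 - error of commission
overall_acc   = correct / total

# Kappa: overall accuracy adjusted for the agreement expected by chance
expected = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / total**2
kappa = (overall_acc - expected) / (1 - expected)

print(producers_acc, users_acc, overall_acc, kappa)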

37 Vegetation Indices

38 Vegetation Indices: Normalized Difference Vegetation Index (NDVI)
Takes advantage of the "red edge" of vegetation reflectance that occurs between red and near-infrared (NIR) reflectance
NDVI = (NIR - Red) / (NIR + Red)   (sketched in code below)
Many more indices, with many variants, exist (lots of acronyms like SAVI, etc.)
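A minimal Python/NumPy sketch of the NDVI formula applied per pixel; the tiny red and NIR arrays are made-up reflectance values for illustration.

# Per-pixel NDVI = (NIR - Red) / (NIR + Red), guarding against division by zero.
import numpy as np

def ndvi(nir, red):
    """Compute NDVI from matching NIR and red band arrays."""
    nir = nir.astype(float)
    red = red.astype(float)
    denom = nir + red
    return np.where(denom == 0, 0.0, (nir - red) / np.where(denom == 0, 1, denom))

red = np.array([[0.08, 0.10], [0.30, 0.05]])
nir = np.array([[0.45, 0.50], [0.32, 0.40]])
print(ndvi(nir, red))   # vegetated pixels give values toward +1, bare soil near 0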

39 Normalized Difference Vegetation Index (NDVI)
[Presenter note: mention LAI (leaf area index)]
Often, the more vegetation leaves present, the bigger the contrast in reflectance between the red and near-infrared spectra
NDVI most accurately approximates the Fraction of Absorbed Photosynthetically Active Radiation (FAPAR)

40 NDVI from AVHRR
[NDVI composites for six periods: Feb 27-Mar 12, Apr 24-May 7, Jun 19-Jul 2, Jul 17-Jul 30, Aug 14-Aug 27, Nov 6-Nov 19]

41 NDVI and Precipitation Relationships
Expansion and contraction of the Sahara
[Four NDVI panels: A: 12 Apr-2 May 1982; B: 5-25 Jul 1982; C: 22 Sep-17 Oct 1982; D: 10 Dec-Jan 1983]

42 Monitoring Forest Fire
[Pre-fire and post-fire images; burned area identified from space using NDVI]

