
1 Summer Session 09 August 2011

2 Tips for the Final Exam
Make sure your answers are clear, without convoluted language.
Read questions carefully – are you answering the entire question? Be thorough!
Synthesize material you have learned in this class.
Think about applications of remote sensing science – tie in your lab experiences with what you have learned from the book/lecture.
Likely format:
14 multiple choice – 2 points each = 28 points
10 short answer – 4 points (9) or 6 points (1) each = 42 points; you will have choices for most – 1A or 1B, 2A or 2B, etc.
2 or 3 essay questions – total of 50 points; you will probably have choices here, too. Longer, more challenging, more comprehensive.
Extra Credit Assignment – lab worth 3%, due 18 August before the final exam

3

4

5 Goals for image processing
Identify – features, single or multiple characteristics, areas of change, etc.
Quantify – spatial extent of features; magnitude of features (levels; e.g. fire severity, or extent of burn)
Analyze – derive meaningful information

6 Image Processing – 3 Primary Tasks
1. Identify and map a specific feature of interest on the imagery (e.g. identify deforested areas)
2. Create a map with multiple categories or levels (e.g. create a land cover map)
3. Create maps that represent different levels of a surface/atmosphere characteristic, i.e. a map of different levels of the same characteristic (e.g. estimate net primary production in oceanic regions, or percent tree cover, 0–100%)

7 Single Characteristic Multiple Categories Different levels of a single characteristic

8 Image Classification
The process of automatically dividing all pixels within a digital remote sensing image into:
1. Land or surface-cover categories
2. Information themes or quantification of specific surface characteristics

9 From: http://www.fes.uwaterloo.ca/crs/geog376.f2001/ImageAnalysis/ImageAnalysis.html#ImageProcessingSteps

10 1: Evergreen Needleleaf Forests; 2: Evergreen Broadleaf Forests; 3: Deciduous Needleleaf Forests; 4: Deciduous Broadleaf Forests; 5: Mixed Forests; 6: Woodlands; 7: Wooded Grasslands/Shrubs; 8: Closed Bushlands or Shrublands; 9: Open Shrublands; 10: Grasses; 11: Croplands; 12: Bare; 13: Mosses and Lichens http://www.geog.umd.edu/landcover/8km-map.html

11 Radar image classification

12 Cropland Probability – 0-100% Pittman et al. (2010)

13 SeaWiFS (Sea-viewing Wide Field-of-view Sensor) image classification: chlorophyll concentration in the Gulf of Mexico

14 Image Slicing and Thresholding
Thresholding of digital values, i.e. % reflectance or DN
Thresholding of transformed values, e.g. NDVI, NBR, etc.

15 Image classification based on average data values in a single channel is a risky undertaking

16 [Figure: a block of digital numbers for a single land cover type – values such as 14, 15, 17, 15, 16, 13, 18, 16, 17, 15, 14, 13, 15, 11, 12, …]
Average = 14.75; Range = 11–18
Most land surfaces have a range of values, not a single value. This block represents a single land cover type.

17 Lillesand and Kiefer Figure 7-46
Unless the average values are very far apart, a significant number of misclassified pixels will be produced as a result of single-band thresholding.

18 Lillesand and Kiefer Figure 7-11 When the differences between features of interest are high, it is possible to use a simple threshold to discriminate between the features (water vs. land surface)

19 Lillesand and Kiefer Figure 7-11
Water vs. Land: the ranges in digital values for these two surfaces do not overlap, so you can use a level slice to classify your image into two categories:
DN > 40 = land
DN < 40 = water
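
As a worked illustration of the level slice on slide 19, here is a minimal Python/NumPy sketch that splits a single band into land and water at DN = 40. The threshold comes from the slide; the band values themselves are made up.

```python
import numpy as np

# invented single-band image (DN values)
band = np.array([[12, 25, 38, 55],
                 [18, 33, 61, 72],
                 [22, 45, 68, 80],
                 [30, 52, 70, 85]])

# level slice: DN > 40 -> land (1), otherwise water (0)
classified = np.where(band > 40, 1, 0)
print(classified)
```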

20 Two-step level slicing or thresholding
Step 1 – Estimate the range of values of a given surface characteristic on a single band (e.g. vegetation on Landsat 7 ETM+ Band 4)
Step 2 – Create discrete levels of the characteristic ("slice" up the histogram)

21 Example of a 2-step level slice
With AVHRR data, greenness can be estimated from the Normalized Difference Vegetation Index (NDVI):
Greenness = (Near IR – Red) / (Near IR + Red)

22 This greenness map was created by level slicing NDVI Values
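
To make the two-step slice on slides 20–22 concrete, here is a small sketch in Python/NumPy: it computes NDVI from red and near-IR reflectance and then slices the result into discrete greenness classes. The band values and class breakpoints are illustrative, not the ones used for the AVHRR map.

```python
import numpy as np

# invented reflectance values for two bands
red = np.array([[0.08, 0.10, 0.05],
                [0.12, 0.06, 0.04],
                [0.15, 0.07, 0.03]])
nir = np.array([[0.30, 0.45, 0.55],
                [0.25, 0.50, 0.60],
                [0.20, 0.48, 0.65]])

# Step 1: compute the index
ndvi = (nir - red) / (nir + red)

# Step 2: "slice" the NDVI range into discrete greenness classes
breaks = [0.2, 0.4, 0.6]                      # assumed class boundaries
greenness_class = np.digitize(ndvi, breaks)   # 0 = lowest ... 3 = highest

print(ndvi.round(2))
print(greenness_class)
```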

23 Image Classification
Because we have seen the limitations of density slicing, or single-band classifications, let's look at exactly how multiple bands of information are combined to perform a multiband classification.

24 Challenge in remote sensing – how does one capture the information content that is available in the different channels of the digital image?

25

26

27 Lillesand and Kiefer Figure 7-39 If you find this interesting, read up on “the tasseled cap transformation”

28 Image Classification The process of automatically dividing all pixels within a digital remote sensing image into discrete categories Supervised vs. unsupervised

29 Supervised vs. Unsupervised Classification
Supervised classification – a procedure where the analyst guides or supervises the classification process by specifying numerical descriptors of the land cover types of interest.
Unsupervised classification – the computer is allowed to aggregate groups of pixels into like clusters based upon different classification algorithms.

30 Training Areas and Supervised Classification
1. Specified by the analyst to represent the land cover categories of interest
2. Used to compile a numerical "interpretation key" that describes the spectral attributes of the areas of interest
3. Each pixel in the scene is compared to the training areas, and then assigned to one of the categories
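
A minimal sketch of what the numerical "interpretation key" in step 2 might look like in code: per-class means and standard deviations computed from training pixels. The class names and pixel values are invented for illustration.

```python
import numpy as np

# invented training pixels: rows = pixels, columns = bands
training = {
    "water":  np.array([[20, 10], [22, 12], [19,  9]]),
    "forest": np.array([[35, 60], [38, 64], [33, 58]]),
    "urban":  np.array([[70, 55], [75, 58], [68, 52]]),
}

# the "interpretation key": per-class mean and spread in each band
key = {name: {"mean": px.mean(axis=0), "std": px.std(axis=0)}
       for name, px in training.items()}

for name, stats in key.items():
    print(name, "mean:", stats["mean"], "std:", stats["std"].round(2))
```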

31 Multiband Classification Approaches
Minimum distance classifiers*
Parallelepiped classifiers*
Maximum likelihood classifiers*
Decision trees*
Neural networks
*covered in class (know for exam)

32 Minimum Distance Classifiers
[Scatter plot: training pixels for two classes (f and c), with each class mean marked +]
Step 1 – Calculate the average value for each training area in each band.

33 Minimum Distance Classifiers
[Scatter plot: the two class clusters with their means (+) and an unclassified pixel (*)]
Step 2 – For each unclassified pixel, calculate the distance to the average for each training area. The unclassified pixel is placed in the group to which it is closest.

34 Minimum Distance Classifiers Lillesand and Kiefer Figure 7-40

35 Advantages/Disadvantages of Minimum Distance Classifiers
Advantages – simple and computationally efficient.
Disadvantages – does not account for the fact that some categories have a large variance (e.g. pixel #2 on the last slide ended up as sand, but could have been urban!).
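
Putting the two steps together, a minimum-distance classifier can be sketched in a few lines of Python/NumPy. The class means below are assumed to come from a training step like the one sketched earlier; the test pixels are invented.

```python
import numpy as np

# assumed per-class band means (from a training step like the one above)
class_means = {
    "water":  np.array([20.3, 10.3]),
    "forest": np.array([35.3, 60.7]),
    "urban":  np.array([71.0, 55.0]),
}

def classify_min_distance(pixel):
    # assign the pixel to the class whose mean is closest (Euclidean distance)
    return min(class_means, key=lambda c: np.linalg.norm(pixel - class_means[c]))

print(classify_min_distance(np.array([30.0, 50.0])))   # nearest to the forest mean
print(classify_min_distance(np.array([65.0, 50.0])))   # nearest to the urban mean
```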

36 Parallelepiped Classifiers
[Scatter plot: training pixels for two classes (f and c) enclosed by rectangular decision boxes]
Step 1 – Define the range of values in each training area and use these ranges to construct an n-dimensional box (a parallelepiped) around each class.

37 Lillesand and Kiefer Figure 7-41
A pixel falls into a category if it falls within that category's n-dimensional box; otherwise it is unclassified. A problem with the parallelepiped approach is that there can be overlap between categories.

38 Lillesand and Kiefer Figure 7-41
The overlapping regions can be fixed with a parallelepiped classifier that uses a stepped decision region boundary.
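
For comparison, here is a sketch of the basic (non-stepped) parallelepiped rule: each class is a min/max box per band, a pixel inside exactly one box gets that label, and a pixel outside every box stays unclassified. The box limits are invented.

```python
import numpy as np

# invented per-class (min, max) limits in each band
class_boxes = {
    "water":  (np.array([15,  5]), np.array([25, 15])),
    "forest": (np.array([30, 55]), np.array([40, 68])),
    "urban":  (np.array([65, 50]), np.array([80, 60])),
}

def classify_parallelepiped(pixel):
    hits = [name for name, (lo, hi) in class_boxes.items()
            if np.all(pixel >= lo) and np.all(pixel <= hi)]
    if len(hits) == 1:
        return hits[0]
    return "unclassified" if not hits else "overlap: " + "/".join(hits)

print(classify_parallelepiped(np.array([36, 60])))   # inside the forest box
print(classify_parallelepiped(np.array([50, 30])))   # outside every box
```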

39 Maximum likelihood classifiers Based on a probability function derived from a statistical distribution of reflectance values

40 Lillesand and Kiefer Figure 7-46
Plots of DN values create a histogram, which usually fits a certain statistical distribution.

41 There are statistical functions or equations which describe the distribution of data

42 Lillesand and Kiefer Figure 7-43
A 3-dimensional normal curve fit to the digital values from two channels of a Landsat scene.

43 Steps for a maximum likelihood classifier
1. Determine the n-dimensional curve for a particular feature
2. Fit it to a normal distribution
3. Use statistical algorithms to describe the distributions
4. Define the levels of probability acceptable for classification of a given pixel

44 Lillesand and Kiefer Figure 7-44
Maximum likelihood classifiers: the equal-probability contours constructed around the different training areas are used to classify the image. The maximum likelihood classifier selects the category with the highest probability for each pixel.
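
A minimal sketch of a Gaussian maximum likelihood decision: each class is summarized by a mean vector and covariance matrix from its training pixels, and a pixel is assigned to the class with the highest probability density. The training statistics below are invented, and a real implementation would normally also apply a probability threshold as described on slide 43.

```python
import numpy as np

# invented training statistics (mean vector and covariance matrix per class)
classes = {
    "water":  {"mean": np.array([20.0, 10.0]),
               "cov":  np.array([[4.0, 1.0], [1.0, 3.0]])},
    "forest": {"mean": np.array([35.0, 60.0]),
               "cov":  np.array([[9.0, 2.0], [2.0, 8.0]])},
}

def gaussian_density(x, mean, cov):
    # 2-band multivariate normal probability density
    d = x - mean
    norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(cov)))
    return norm * np.exp(-0.5 * d @ np.linalg.inv(cov) @ d)

def classify_max_likelihood(pixel):
    # pick the class with the highest probability density for this pixel
    return max(classes, key=lambda c: gaussian_density(pixel, classes[c]["mean"],
                                                       classes[c]["cov"]))

print(classify_max_likelihood(np.array([33.0, 55.0])))   # -> "forest"
```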

45 Unsupervised classification
Useful when there is a lack of a priori information on what types of land or vegetation cover exist within a region.
BUT: it may be difficult to interpret the computer-generated classes.

46 Unsupervised Classification Lillesand and Kiefer Figure 7-51 Allow the computer to identify clusters based on different classification procedures
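
One common clustering procedure used for unsupervised classification is k-means; the sketch below runs a few hand-rolled iterations on invented two-band pixel values. This is only an illustration of the idea of letting the computer form clusters, not necessarily the algorithm behind the figure.

```python
import numpy as np

rng = np.random.default_rng(0)
# invented two-band "image", flattened to (n_pixels, 2), with two obvious groups
pixels = rng.normal(loc=[[20, 10]] * 50 + [[60, 70]] * 50, scale=3.0)

k = 2
centers = pixels[rng.choice(len(pixels), size=k, replace=False)]
for _ in range(10):                       # a few k-means iterations
    # assign every pixel to its nearest cluster center
    dist = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
    labels = dist.argmin(axis=1)
    # move each center to the mean of its members (keep it if a cluster is empty)
    centers = np.array([pixels[labels == i].mean(axis=0) if np.any(labels == i)
                        else centers[i] for i in range(k)])

print(centers.round(1))   # cluster means; the analyst labels the clusters afterwards
```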

47 Hybrid Classification Approach
1. Perform an unsupervised classification to create a number of land cover categories within the area of interest
2. Carry out field surveys to identify the land cover type represented by the different unsupervised clusters
3. Use a supervised approach to combine unsupervised clusters into similar land cover categories

48 Sources of Uncertainty in Image Classification 1. Non-representative training areas 2. High variability in the spectral signatures for a land cover class 3. Mixed land cover within the pixel area

49 Mixed pixels
In many cases, the IFOV of a sensor will include multiple land cover categories – i.e., a mixed pixel. Mixed pixels contribute to classification errors.

50 [Scatter plot: clusters for classes f, c, and d, plus mixed pixels (m) falling between them]
Question – How do different algorithms treat mixed pixels?
In some cases, mixed pixels are close enough to a specific category that they are assigned to it, which leads to misclassifications.
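
A tiny sketch of why mixed pixels cause trouble, assuming simple linear mixing of two invented class signatures: the observed value falls between the pure classes and may end up nearest to neither.

```python
import numpy as np

# invented pure-class signatures in two bands
forest = np.array([35.0, 60.0])
water  = np.array([20.0, 10.0])

# a pixel whose IFOV covers 60% forest and 40% water, under linear mixing
mixed = 0.6 * forest + 0.4 * water
print(mixed)   # [29. 40.] - in between, and possibly closer to some other class
```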

51 Decision Tree Classifier Decision tree classifiers use a simple set of rules to divide pixels into different land cover types
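
A decision tree of this kind can be expressed as a handful of nested threshold rules. The sketch below is purely illustrative; the rules and thresholds are invented, not taken from any published classifier.

```python
def classify_pixel(ndvi, nir):
    # invented rules: each branch is a simple threshold test
    if ndvi < 0.1:
        return "water" if nir < 0.1 else "bare"
    elif ndvi < 0.4:
        return "grassland/cropland"
    else:
        return "forest"

print(classify_pixel(ndvi=0.05, nir=0.05))   # -> water
print(classify_pixel(ndvi=0.65, nir=0.40))   # -> forest
```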

52 1: Evergreen Needleleaf Forests; 2: Evergreen Broadleaf Forests; 3: Deciduous Needleleaf Forests; 4: Deciduous Broadleaf Forests; 5: Mixed Forests; 6: Woodlands; 7: Wooded Grasslands/Shrubs; 8: Closed Bushlands or Shrublands; 9: Open Shrublands; 10: Grasses; 11: Croplands; 12: Bare; 13: Mosses and Lichens http://www.geog.umd.edu/landcover/8km-map.html

53 Classification logic [Figure panels a, b, c]

54

55

56

57

58

59 www.eomf.ou.edu
They used time-series vegetation indices (VIs) to look for the number of cropping cycles per year to map agricultural intensification.

60 Accuracy assessment & Validation
It is necessary to provide information about the accuracy of a given mapping approach; different applications require different levels of accuracy.
For land cover classifications, no global map has ever exceeded 70% accuracy.
And what is accuracy in the context of remote sensing, anyway? (Keep this question in mind.) Relative to other maps? Relative to the ground?
There are efforts to standardize validation approaches – see the "best practices" document at http://landval.gsfc.nasa.gov/pdf/GlobalLandCoverValidation.pdf

61 Types of accuracy assessment / validation
1. Inventory assessment – e.g. how many acres of forest we get through our classification vs. how many acres of forest are reported by the forest service
2. Confusion matrix – provides information on both the accuracy of the amount mapped and the accuracy of the geographic distribution. You will see this in your lab today!

62 Inventory assessment
Advantages: fairly straightforward method; comparison against ground data.
Disadvantages: strongly depends on the accuracy of the provided information (underreporting, out-of-date data, etc.)

63 Confusion matrix
Overall Accuracy = (302713/335386) = 90.2581%
Kappa Coefficient = 0.8322

Ground Truth (Pixels)
Class            | Unclassified | urban [Red] | forest [Green] | water [Blue] | Total
Unclassified     | 0            | 0           | 0              | 0            | 0
urban [Red]      | 0            | 24897       | 766            | 0            | 25663
forest [Green]   | 0            | 22061       | 161877         | 0            | 183938
water [Blue]     | 0            | 2012        | 7834           | 115939       | 125785
Total            | 0            | 48970       | 170477         | 115939       | 335386

Ground Truth (Percent)
Class            | Unclassified | urban [Red] | forest [Green] | water [Blue] | Total
Unclassified     | 0.00         | 0.00        | 0.00           | 0.00         | 0.00
urban [Red]      | 0.00         | 50.84       | 0.45           | 0.00         | 7.65
forest [Green]   | 0.00         | 45.05       | 94.96          | 0.00         | 54.84
water [Blue]     | 0.00         | 4.11        | 4.60           | 100.00       | 37.50
Total            | 0.00         | 100.00      | 100.00         | 100.00       | 100.00

Class            | Commission (%) | Omission (%) | Commission (Pixels) | Omission (Pixels)
Unclassified     | 0.00           | 0.00         | 0/0                 | 0/0
urban [Red]      | 2.98           | 49.16        | 766/25663           | 24073/48970
forest [Green]   | 11.99          | 5.04         | 22061/183938        | 8600/170477
water [Blue]     | 7.83           | 0.00         | 9846/125785         | 0/115939

Class            | Prod. Acc. (%) | User Acc. (%) | Prod. Acc. (Pixels) | User Acc. (Pixels)
Unclassified     | 0.00           | 0.00          | 0/0                 | 0/0
urban [Red]      | 50.84          | 97.02         | 24897/48970         | 24897/25663
forest [Green]   | 94.96          | 88.01         | 161877/170477       | 161877/183938
water [Blue]     | 100.00         | 92.17         | 115939/115939       | 115939/125785

64 Confusion Matrix
Advantages: provides statistics for both inventory and geographic information.
Disadvantages: limited availability of comparable ground truth data (very difficult and expensive to collect, not available in many areas).
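
The summary statistics on slide 63 can be reproduced from the pixel counts in the confusion matrix; the sketch below computes overall accuracy, producer's and user's accuracy, and the kappa coefficient from those counts (rows = mapped class, columns = ground truth).

```python
import numpy as np

# pixel counts from slide 63, classes in order: urban, forest, water
cm = np.array([[ 24897,    766,      0],
               [ 22061, 161877,      0],
               [  2012,   7834, 115939]])

total = cm.sum()
overall_acc = np.trace(cm) / total              # 302713 / 335386 ~ 0.9026
producers_acc = np.diag(cm) / cm.sum(axis=0)    # per class, down each column
users_acc = np.diag(cm) / cm.sum(axis=1)        # per class, along each row

# kappa compares observed agreement with the agreement expected by chance
expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total**2
kappa = (overall_acc - expected) / (1 - expected)

print(round(overall_acc, 4))                    # ~0.9026
print(producers_acc.round(4), users_acc.round(4))
print(round(kappa, 4))                          # ~0.8322
```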

